What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems (1910.06144v2)

Published 28 Sep 2019 in cs.CY and cs.AI

Abstract: The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the way decisions are made on who is eligible for jobs, and why, are rapidly changing with the advent and growth in uptake of automated hiring systems (AHSs) powered by data-driven tools. Key concerns about such AHSs include the lack of transparency and potential limitation of access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and promote diversity and inclusion at work. Yet whilst these tools have a growing user-base around the world, such claims of bias mitigation are rarely scrutinised and evaluated, and when done so, have almost exclusively been from a US socio-legal perspective. In this paper, we introduce a perspective outside the US by critically examining how three prominent automated hiring systems (AHSs) in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US in terms not only of hiring and equality law, but also in terms of data protection (DP) law. We argue that this might be important for addressing concerns about transparency and could mean a challenge to building bias mitigation into AHSs definitively capable of meeting EU legal standards. This is significant as these AHSs, especially those developed in the US, may obscure rather than improve systemic discrimination in the workplace.

Authors (3)
  1. Javier Sanchez-Monedero (3 papers)
  2. Lina Dencik (2 papers)
  3. Lilian Edwards (5 papers)
Citations (126)

Summary

Automated Hiring Systems and the Challenge of Mitigating Discrimination: A UK Perspective

The paper "What does it mean to `solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems" presents a critical examination of automated hiring systems (AHSs) and their claims of bias mitigation, focusing on three prominent systems: HireVue, Pymetrics, and Applied. The authors meticulously analyze the design and validation processes of these AHSs to explore how they attempt to address discrimination and promote diversity and inclusion in hiring practices.

Claims of Bias Mitigation

This investigation reveals a nuanced picture of the bias mitigation tactics employed by AHSs. These systems claim to reduce human bias by leveraging data-driven methodologies, supposedly providing a fairer and more objective assessment of candidates. For example, Pymetrics uses neuroscience-based games to evaluate candidates' cognitive, social, and emotional traits and applies statistical tests to check that these metrics do not disproportionately disadvantage protected groups. It employs the audit-AI toolkit to check compliance with the US Equal Employment Opportunity Commission's four-fifths (4/5ths) rule.
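To make the four-fifths rule concrete, the sketch below computes group selection rates and flags any group whose rate falls below 80% of the highest group's rate. The data, function names, and structure are illustrative assumptions for this summary and do not reproduce audit-AI's actual API.

```python
# Illustrative sketch of the EEOC four-fifths (80%) rule, not audit-AI's actual API.
# A group's adverse impact ratio is its selection rate divided by the selection
# rate of the most-selected group; values below 0.8 flag potential adverse impact.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (number passed, number assessed)."""
    return {group: passed / assessed for group, (passed, assessed) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, bool]:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # A group passes the check if its rate is at least 80% of the highest rate.
    return {group: (rate / best) >= threshold for group, rate in rates.items()}

# Hypothetical assessment outcomes: (candidates passed, candidates assessed)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))  # {'group_a': True, 'group_b': False} since 0.30/0.48 ≈ 0.625 < 0.8
```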

Similarly, HireVue uses video interviews and games to profile candidates, focusing on various categorical, audio, and video features. The system removes indicators linked to adverse impacts and introduces fairness constraints in the learning algorithm to ensure parity across protected groups.
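The paper does not disclose HireVue's training procedure, but a common way to encode such a parity constraint is to penalize the gap in mean predicted score between protected groups during training. The sketch below illustrates this idea for a simple logistic scoring model, with a hypothetical penalty weight lam; it is a generic example of fairness-constrained learning, not HireVue's actual algorithm.

```python
import numpy as np

# Illustrative fairness-constrained training: a logistic scoring model with a
# penalty on the squared gap in mean predicted score between two groups
# (a demographic-parity-style constraint). Generic sketch with assumed names
# (train, lam), not HireVue's proprietary method.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, group, lam=5.0, lr=0.1, epochs=500):
    """X: (n, d) features; y: 0/1 outcomes; group: 0/1 protected-group flags."""
    w = np.zeros(X.shape[1])
    a, b = group == 1, group == 0
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)                   # logistic-loss gradient
        gap = p[a].mean() - p[b].mean()                      # parity gap in mean scores
        dp = p * (1 - p)                                     # derivative of the sigmoid
        grad_gap = (X[a] * dp[a][:, None]).mean(axis=0) \
                 - (X[b] * dp[b][:, None]).mean(axis=0)
        w -= lr * (grad_loss + lam * 2 * gap * grad_gap)     # gradient of lam * gap**2
    return w

# Hypothetical usage with random data, purely to show the call shape.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
group = rng.integers(0, 2, size=200)
y = (X[:, 0] + rng.normal(size=200) > 0).astype(float)
w = train(X, y, group)
```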

Applied, meanwhile, addresses discrimination and bias through a different lens. It does not automate candidate assessment but provides a platform for monitoring biases by analyzing gendered language, employing anonymization techniques, and offering visual analytics to highlight potential biases during the hiring process.
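As a rough illustration of gendered-language monitoring, the sketch below flags masculine- and feminine-coded words in a job advert using abbreviated word lists in the spirit of Gaucher et al.'s gendered-wording research. The lists, names, and scoring here are assumptions for illustration, not Applied's actual lexicon or method.

```python
# Illustrative gendered-language check for a job advert. The word lists are
# abbreviated examples only, not Applied's actual lexicon or scoring method.

MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "independent"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal", "committed"}

def gender_coding(ad_text: str) -> dict:
    words = [w.strip(".,;:!?").lower() for w in ad_text.split()]
    m = [w for w in words if w in MASCULINE_CODED]
    f = [w for w in words if w in FEMININE_CODED]
    lean = "masculine" if len(m) > len(f) else "feminine" if len(f) > len(m) else "neutral"
    return {"masculine_terms": m, "feminine_terms": f, "lean": lean}

print(gender_coding("We want an ambitious, competitive engineer who is also collaborative."))
# {'masculine_terms': ['ambitious', 'competitive'], 'feminine_terms': ['collaborative'], 'lean': 'masculine'}
```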

Limitations and Challenges

The paper identifies inherent limitations in these systems' approaches to bias mitigation. These include reliance on historical data about "best-performing" or "fit" employees, data that may perpetuate existing biases and discrimination. Furthermore, definitions of fairness and bias mitigation vary with differing interpretations of what constitutes discrimination, and they often neglect intersectionality in group identification, which can lead to biased outcomes for people with multi-faceted identities.

A notable computational challenge is the difficulty of defining fairness metrics that truly capture the social dimensions of discrimination. The paper argues that metrics focused on disparities in passing rates (disparate impact) may not capture deeper issues of disparate treatment or mistreatment, such as false negative rejections of qualified candidates.
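This contrast can be made concrete with a small hypothetical example: two groups can have identical selection rates, satisfying a passing-rate test, while qualified candidates in one group are rejected far more often, a disparity that only a false-negative-rate metric, which requires ground truth about who was qualified, would surface. The data and names below are hypothetical, purely to show why the two metrics can diverge.

```python
# Illustrative contrast between a passing-rate (disparate impact) metric and a
# false-negative-rate metric, which needs a ground-truth notion of "qualified".

def rates(selected, qualified):
    """selected/qualified: lists of 0/1 flags per candidate."""
    pass_rate = sum(selected) / len(selected)
    # False negative rate: qualified candidates who were nevertheless rejected.
    fn = sum(1 for s, q in zip(selected, qualified) if q == 1 and s == 0)
    fnr = fn / max(sum(qualified), 1)
    return pass_rate, fnr

# Two groups with equal pass rates (no disparity on a passing-rate view) ...
sel_a, qual_a = [1, 1, 0, 0], [1, 1, 0, 0]   # group A: rejects only unqualified candidates
sel_b, qual_b = [1, 1, 0, 0], [1, 0, 1, 1]   # group B: rejects two qualified candidates

pa, fa = rates(sel_a, qual_a)
pb, fb = rates(sel_b, qual_b)
print(pa, pb)  # 0.5 0.5  -> equal selection rates
print(fa, fb)  # 0.0 0.67 -> very different false negative rates
```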

Legal Context

The paper’s discussion of the UK legal framework highlights significant discrepancies between UK/EU legal standards and the US-centric foundations on which these systems are built. The UK's concept of discrimination often diverges from the statistical, rule-based approach seen in the US, and UK and EU intellectual property frameworks may be ill-suited to software systems derived from US models. More crucially, under the GDPR as it applies in the UK, candidates are entitled to transparency and potentially to a "right to an explanation" in relation to Article 22's provisions on automated decision-making, challenging the opacity commonly inherent in AHS operations.

Implications for Future Developments

This research invites broader reflection on the deployment of AHSs and the necessity of incorporating socio-legal dimensions into technological design. As AHSs become more prevalent, a crucial and immediate question is whether these systems are adequately equipped, both in theory and in practice, to uphold fundamental rights in employment contexts.

Furthermore, the authors call for more comprehensive evaluations of UK- and EU-developed systems, alongside robust qualitative analyses to gauge actual practices and impacts on employers and employees. Future studies should aim to connect the computational constraints of fairness metrics with contextual legal scrutiny to ensure ethically aligned progress in hiring technologies.

Ultimately, the urgent need to assess the burgeoning use of AHSs underlines the importance of balancing innovation with the protection of candidates' rights, fostering environments in which technology aids rather than hinders equitable access to employment.
