Automated Hiring Systems and the Challenge of Mitigating Discrimination: A UK Perspective
The paper "What does it mean to `solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems" presents a critical examination of automated hiring systems (AHSs) and their claims of bias mitigation, focusing on three prominent systems: HireVue, Pymetrics, and Applied. The authors meticulously analyze the design and validation processes of these AHSs to explore how they attempt to address discrimination and promote diversity and inclusion in hiring practices.
Claims of Bias Mitigation
The investigation surfaces the bias-mitigation tactics each system employs. All three claim to reduce human bias by leveraging data-driven methodologies, supposedly providing a fairer and more objective assessment of candidates. Pymetrics, for example, uses neuroscience-based games to evaluate candidates' cognitive, social, and emotional traits, and applies statistical tests to check that these metrics do not disproportionately disadvantage protected groups. For regulatory compliance it relies on its open-source audit-AI tool, which tests against the US Equal Employment Opportunity Commission's four-fifths (4/5ths) rule.
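To make the four-fifths rule concrete, here is a minimal Python sketch of an adverse impact check in the spirit of audit-AI. The group names and selection rates are invented for illustration, and this is not the audit-AI API itself:

```python
# Hypothetical sketch of the EEOC four-fifths (80%) rule check, similar in
# spirit to the test implemented by pymetrics' open-source audit-AI package.
# Group names and selection rates are illustrative, not from the paper.

def adverse_impact_ratio(pass_rates: dict[str, float]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate."""
    top = max(pass_rates.values())
    return {group: rate / top for group, rate in pass_rates.items()}

# Selection (pass) rates per protected group from a screening stage.
rates = {"group_a": 0.60, "group_b": 0.44}

for group, ratio in adverse_impact_ratio(rates).items():
    # A ratio below 0.8 flags potential adverse impact under the 4/5ths rule.
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
```

Here group_b's selection rate is roughly 73% of group_a's, below the 80% threshold, so the stage would be flagged for potential adverse impact.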
Similarly, HireVue profiles candidates through video interviews and games, extracting categorical, audio, and video features. It reports removing features linked to adverse impact and adding fairness constraints to the learning algorithm in pursuit of parity across protected groups; a sketch of the latter idea follows below.
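One common way to encode such a constraint is to train a classifier with an added penalty on the gap in mean predicted score between groups, a soft demographic-parity constraint. The sketch below does this for a logistic regression; all data and parameter choices are invented, and it illustrates the general technique rather than HireVue's actual algorithm:

```python
# Minimal fairness-constrained logistic regression: the usual logistic loss
# plus lam * (mean score gap between groups)^2, a soft demographic-parity
# penalty. Illustrative only; not HireVue's actual method.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, group, lam=5.0, lr=0.1, steps=2000):
    w = np.zeros(X.shape[1])
    a, b = group == 1, group == 0
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_nll = X.T @ (p - y) / len(y)            # logistic-loss gradient
        gap = p[a].mean() - p[b].mean()              # demographic-parity gap
        dp = p * (1 - p)                             # d sigmoid / d logit
        dgap = X[a].T @ dp[a] / a.sum() - X[b].T @ dp[b] / b.sum()
        w -= lr * (grad_nll + lam * 2 * gap * dgap)  # penalized gradient step
    return w

# Synthetic data in which group membership correlates with the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
group = (rng.random(400) < 0.5).astype(int)
y = (X[:, 0] + 0.8 * group + rng.normal(size=400) > 0).astype(int)

w = train(X, y, group)
p = sigmoid(X @ w)
print("parity gap:", round(p[group == 1].mean() - p[group == 0].mean(), 3))
```

Raising lam shrinks the between-group score gap at some cost to predictive fit, which is why the choice and strength of a fairness constraint is itself a substantive design decision rather than a neutral technical fix.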
Applied, meanwhile, approaches discrimination and bias through a different lens. Rather than automating candidate assessment, it provides a platform for monitoring bias: analyzing gendered language in job descriptions, anonymizing applications, and offering visual analytics that highlight potential biases during the hiring process.
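As a rough illustration of gendered-language analysis, the sketch below flags gender-coded words in a job ad. The word lists are tiny invented samples; production tools such as Applied's draw on much larger, research-derived lexicons:

```python
# Illustrative gendered-language screening of a job description.
# Word lists are small invented samples, not a real lexicon.
MASCULINE_CODED = {"competitive", "dominant", "ninja", "rockstar", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def flag_gendered_terms(text: str) -> dict[str, list[str]]:
    """Return gender-coded words found in the text, by category."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We want a competitive, dominant rockstar to join our collaborative team."
print(flag_gendered_terms(ad))
# {'masculine_coded': ['competitive', 'dominant', 'rockstar'],
#  'feminine_coded': ['collaborative']}
```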
Limitations and Challenges
The paper identifies inherent limitations in these systems' approaches to bias mitigation. Chief among them is the reliance on historical data about "best-performing" or "fit" employees, data that may encode and perpetuate existing biases. Moreover, each system's definition of fairness reflects a particular interpretation of what constitutes discrimination, and protected groups are typically treated as single, separable categories; this neglect of intersectionality can produce biased outcomes for people whose identities span multiple groups.
A notable computational challenge is formulating fairness metrics that capture the social dimensions of discrimination. The paper argues that metrics focused on disparities in passing rates (disparate impact) may miss disparate treatment or disparate mistreatment, such as unequal rates of false negative rejections across groups; the numeric sketch below shows how the two can come apart.
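The following sketch uses invented confusion-matrix counts to show that two groups can have identical selection rates, so a disparate-impact test passes, while qualified candidates in one group are rejected at more than twice the rate of the other:

```python
# Invented confusion-matrix counts per group (tp/fn/fp/tn), showing how a
# passing-rate metric can look fine while false negative rates diverge.
def rates(tp, fn, fp, tn):
    """Compute selection rate and false negative rate from confusion counts."""
    total = tp + fn + fp + tn
    return {
        "selection_rate": (tp + fp) / total,
        "false_negative_rate": fn / (tp + fn),  # qualified candidates rejected
    }

group_a = rates(tp=40, fn=10, fp=10, tn=40)  # FNR = 0.20
group_b = rates(tp=25, fn=25, fp=25, tn=25)  # same selection rate, FNR = 0.50

print(group_a)  # {'selection_rate': 0.5, 'false_negative_rate': 0.2}
print(group_b)  # {'selection_rate': 0.5, 'false_negative_rate': 0.5}
```

Both groups have a 50% selection rate, so the disparate-impact ratio is exactly 1.0, yet qualified candidates in group_b are wrongly rejected 2.5 times as often as those in group_a.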
Legal Context
The paper's discussion of the UK legal framework highlights significant discrepancies between UK/EU legal standards and the US-centric foundations on which these systems are built. The UK concept of discrimination diverges from the statistical, rule-based approach codified in US practice, and intellectual property frameworks in the UK and EU may be a poor fit for software systems built around US models. More crucially, under the General Data Protection Regulation (GDPR), which governs data protection in the UK, candidates are entitled to transparency and potentially a "right to an explanation" of automated decisions under Article 22, challenging the opacity typical of AHS operations.
Implications for Future Developments
This research invites broader reflection on the deployment of AHSs and the need to incorporate socio-legal dimensions into technological design. As AHSs become more prevalent, the immediate question is whether these systems are equipped to uphold fundamental rights in employment contexts, both in theory and in practice.
The authors also call for more comprehensive evaluations of UK- and EU-developed systems, alongside robust qualitative analyses of actual practices and their impacts on employers and job seekers. Future work should connect computational formulations of fairness with context-sensitive legal scrutiny to keep hiring technologies ethically aligned.
Ultimately, the urgency of assessing the burgeoning use of AHSs underlines the importance of balancing innovation with the protection of candidates' rights, fostering environments where technology aids rather than hinders equitable access to employment.