The Algorithmic Imprint (2206.03275v1)

Published 3 Jun 2022 in cs.CY, cs.AI, and cs.HC

Abstract: When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences. We operationalize this concept and its implications through the 2020 events surrounding the algorithmic grading of the General Certificate of Education (GCE) Advanced (A) Level exams, an internationally recognized UK-based high school diploma exam administered in over 160 countries. While the algorithmic standardization was ultimately removed due to global protests, we show how the removal failed to undo the algorithmic imprint on the sociotechnical infrastructures that shape students', teachers', and parents' lives. These events provide a rare chance to analyze the state of the world both with and without algorithmic mediation. We situate our case study in Bangladesh to illustrate how algorithms made in the Global North disproportionately impact stakeholders in the Global South. Chronicling more than a year-long community engagement consisting of 47 interviews, we present the first coherent timeline of "what" happened in Bangladesh, contextualizing "why" and "how" they happened through the lenses of the algorithmic imprint and situated algorithmic fairness. Analyzing these events, we highlight how the contours of the algorithmic imprints can be inferred at the infrastructural, social, and individual levels. We share conceptual and practical implications around how imprint-awareness can (a) broaden the boundaries of how we think about algorithmic impact, (b) inform how we design algorithms, and (c) guide us in AI governance.

Examining the Consequences of Algorithmic Decisions: An Analysis of the Algorithmic Imprint

The concept of the "algorithmic imprint," introduced in this paper, addresses the persistent impact of algorithmic systems beyond their operational phases. The research contextualizes this notion using the 2020 algorithmic grading debacle of the GCE A Level exams, a high-stakes academic evaluation process significantly disrupted by COVID-19. The analysis is set in Bangladesh, a context where the effects of such algorithmic decisions are felt more acutely because of infrastructural disparities between the Global North, where the algorithm was designed, and the Global South, where it was applied.

The paper presents a comprehensive and critical examination of the algorithmic grading process, including the use of Teacher Assessed Grades (TAGs), ordinal student rankings, and algorithmic standardization intended to compensate for canceled in-person exams. These interventions, while seemingly temporary, left significant procedural and psychological imprints. The paper deepens the algorithmic fairness discourse by revealing the aftereffects of algorithmic interventions, particularly in scenarios where an algorithm is removed but its impacts persist; a toy sketch of the standardization mechanism follows below.
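To make the standardization step more concrete, the sketch below shows, in Python, a toy distribution-matching procedure: students are ordered by the teacher-supplied ranking and assigned grades so that a centre's grade distribution matches its historical share. This is a hypothetical illustration of the general mechanism, not the model Ofqual actually used, which additionally adjusted for cohort prior attainment and handled small cohorts differently; the names standardize_centre, GRADES, and historical_share are invented for this example.

```python
# Illustrative sketch only: a toy distribution-matching grade standardization.
# The real 2020 standardization model was more involved (prior-attainment
# adjustments, special handling for small cohorts) and is not reproduced here.

from typing import Dict, List

GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst


def standardize_centre(ranked_students: List[str],
                       historical_share: Dict[str, float]) -> Dict[str, str]:
    """Assign grades to students (best-first by teacher ranking) so that the
    centre's grade distribution matches its historical grade shares."""
    n = len(ranked_students)
    results: Dict[str, str] = {}
    cumulative = 0.0
    idx = 0
    for grade in GRADES:
        cumulative += historical_share.get(grade, 0.0)
        cutoff = round(cumulative * n)  # students at or above this grade
        while idx < min(cutoff, n):
            results[ranked_students[idx]] = grade
            idx += 1
    while idx < n:  # rounding leftovers get the lowest grade
        results[ranked_students[idx]] = GRADES[-1]
        idx += 1
    return results


# Hypothetical centre: teacher-ranked cohort of 20, historical shares summing to 1.
ranking = [f"student_{i}" for i in range(1, 21)]
history = {"A*": 0.10, "A": 0.20, "B": 0.30, "C": 0.25, "D": 0.10, "E": 0.05}
print(standardize_centre(ranking, history))
```

Even in this toy form, the design choice is visible: the centre's historical record and the ordinal ranking, not the individual's teacher-assessed grade, determine the outcome. Those records and rankings are exactly the kind of data infrastructure the paper argues persists as an imprint after the algorithm itself is withdrawn.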

Strong Empirical and Conceptual Insights

The paper is grounded in extensive fieldwork, including 47 in-depth interviews with Bangladeshi students and educators conducted over more than a year, supported by more than 100 informal engagements. It offers the first coherent timeline of how the GCE A Level controversy unfolded in Bangladesh and analyzes the ensuing events through the lens of the algorithmic imprint, thereby addressing a gap in mainstream media reporting, which focused primarily on events in the UK.

Empirical findings confirm that algorithmic standardization mechanisms, once operationalized, normalize practices and construct new data infrastructures (e.g., historical student performance records) that endure beyond the algorithm's discontinuation. The authors argue that such imprints emerge at infrastructural and social levels and even shape individual psychological states, as seen with teachers who were left without agency or recognition in the grading process despite their substantial labor.

Theoretical and Practical Implications for AI Development

The framework of the algorithmic imprint offers a profound repositioning of how we conceptualize the impact of algorithmic systems. The imprint extends the boundary of algorithmic impact assessments beyond technical parameters to include the ongoing socio-political effects and practices that sustain these systems. This framing prompts reflection in AI governance, advocating for regulatory frameworks that account for the algorithmic afterlife.

For the design and deployment of algorithmic systems, imprint-awareness motivates a proactive, participatory design ethos that emphasizes equitable infrastructural adaptations, transparency, and accountability, in line with human-centered and sociotechnically informed development processes. Future algorithmic audits and impact assessments could benefit from incorporating imprint-awareness by anticipating potential socio-infrastructural transformations and stakeholder disenfranchisement.

Future Directions

This paper lays the groundwork for further inquiry into the sociotechnical effects wrought by imprints, encouraging the development of a framework with broader applicability across domains that rely on algorithmic decision-making. Researchers are invited to extend this inquiry by exploring how societal structures, particularly in historically marginalized geographies, are reshaped by the implementation and subsequent removal of algorithmic interventions. Such perspectives could enrich our understanding of the lifecycle of algorithmic systems and support comprehensive narratives pivotal for policy-making and ethical AI practice worldwide.

In sum, by formalizing the concept of the algorithmic imprint, this paper urges the AI community to reconsider the temporal and spatial boundaries of AI impact, thereby setting a new paradigm in critically evaluating and governing the full lifecycle of algorithmic systems.

Authors (4)
  1. Upol Ehsan (16 papers)
  2. Ranjit Singh (6 papers)
  3. Jacob Metcalf (5 papers)
  4. Mark O. Riedl (57 papers)
Citations (26)