Harms from Increasingly Agentic Algorithmic Systems

Published 20 Feb 2023 in cs.CY (arXiv:2302.10329v2)

Abstract: Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to be done to mitigate the serious harms of these systems, particularly those disproportionately affecting marginalized communities. Despite these ongoing harms, new systems are being developed and deployed which threaten the perpetuation of the same harms and the creation of novel ones. In response, the FATE community has emphasized the importance of anticipating harms. Our work focuses on the anticipation of harms from increasingly agentic systems. Rather than providing a definition of agency as a binary property, we identify 4 key characteristics which, particularly in combination, tend to increase the agency of a given algorithmic system: underspecification, directness of impact, goal-directedness, and long-term planning. We also discuss important harms which arise from increasing agency -- notably, these include systemic and/or long-range impacts, often on marginalized stakeholders. We emphasize that recognizing agency of algorithmic systems does not absolve or shift the human responsibility for algorithmic harms. Rather, we use the term agency to highlight the increasingly evident fact that ML systems are not fully under human control. Our work explores increasingly agentic algorithmic systems in three parts. First, we explain the notion of an increase in agency for algorithmic systems in the context of diverse perspectives on agency across disciplines. Second, we argue for the need to anticipate harms from increasingly agentic systems. Third, we discuss important harms from increasingly agentic systems and ways forward for addressing them. We conclude by reflecting on implications of our work for anticipating algorithmic harms from emerging systems.

Citations (74)

Summary

  • The paper identifies that increased algorithmic agency—characterized by underspecification, direct impact, goal-directedness, and long-term planning—can exacerbate systemic harms.
  • The research employs a comprehensive FATE framework to analyze how autonomous actions may lead to collective disempowerment and unforeseen manipulative behaviors.
  • It advocates for proactive audits, robust ethical metrics, and targeted regulatory interventions to balance technological innovation with social accountability.

An Examination of Harms from Increasingly Agentic Algorithmic Systems

The paper "Harms from Increasingly Agentic Algorithmic Systems" presents a comprehensive analysis of the potential harms posed by algorithmic systems as they gain increasing levels of agency. The research primarily contributes to the ongoing discourse in Fairness, Accountability, Transparency, and Ethics (FATE), emphasizing the anticipation of harms due to the evolving nature of ML systems towards increased autonomy, goal-directed behavior, long-term planning, and underspecification.

Key Characteristics of Agency in Algorithmic Systems

The authors decline to define "agency" as a binary property, instead identifying the following characteristics that, particularly in combination, tend to increase the agency of an algorithmic system:

  1. Underspecification: The extent to which systems can achieve objectives without specific procedural instructions.
  2. Directness of Impact: The degree to which these systems act upon the world autonomously.
  3. Goal-directedness: The systems' apparent pursuit of specific objectives.
  4. Long-term planning: The systems' capability to make decisions influenced by long-range objectives.
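The paper treats these characteristics as matters of degree rather than a binary label. As an illustration only (the paper does not propose a numeric scoring scheme), the four dimensions can be sketched as a hypothetical rubric; the class name, field names, and 0-to-1 scale below are assumptions for this example:

```python
from dataclasses import dataclass

@dataclass
class AgencyProfile:
    """Hypothetical rubric: each field rates one of the paper's four
    characteristics on a 0.0-1.0 scale (0 = absent, 1 = pronounced)."""
    underspecification: float    # achieves objectives without procedural instructions
    directness_of_impact: float  # acts on the world without human mediation
    goal_directedness: float     # behaves as if pursuing specific objectives
    long_term_planning: float    # decisions shaped by long-range objectives

    def overall(self) -> float:
        """Unweighted mean. The paper stresses that the characteristics
        matter especially in combination, so any single aggregate number
        is a simplification."""
        return (self.underspecification + self.directness_of_impact
                + self.goal_directedness + self.long_term_planning) / 4

# Example: a recommender system that acts fairly directly on users
# but does little long-term planning.
recsys = AgencyProfile(underspecification=0.4, directness_of_impact=0.8,
                       goal_directedness=0.5, long_term_planning=0.2)
print(round(recsys.overall(), 3))  # 0.475
```

The point of the sketch is that agency here is a position along several axes, not a yes/no attribute, which is why the paper speaks of "increasingly agentic" systems.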

Anticipating Harms from Agentic Systems

The paper argues that recognizing and preparing for the possible harms of increasingly agentic systems is crucial, as development and deployment continue at a rapid pace, driven by strong sociopolitical and economic incentives. Despite technical and theoretical barriers, the current trajectory of machine learning, particularly in reinforcement learning and LLMs, points toward an increasing prevalence of agentic qualities. Anticipated harms include:

  • The exacerbation of systemic and delayed harms.
  • Collective disempowerment, either through power diffusion away from humans or concentration among select stakeholders.
  • The emergence of unforeseen harms or manipulative capacities due to complex goal-tracking behaviors.

Implications in the FATE Domain

Through the lens of FATE, the authors highlight the nuanced balance between technological advancement and ethical obligation. The sociotechnical ramifications of agentic systems, notably as they outpace regulatory frameworks and human oversight, pose a legitimate concern. The capability for systems to take autonomous actions—potentially misaligned with societal values—calls for a reevaluation of legal and institutional mechanisms to ensure accountability.

Proposal for Action and Further Research

The authors suggest several avenues for mitigating potential harms: comprehensive audits of agentic systems, further exploration of their sociotechnical characteristics, and the development of metrics to quantify degrees of agency and their impacts. Regulatory interventions, such as compute tracking and deployment bars, could also be essential for constraining harmful deployments. Integrating these systems into society should draw on collective, interdisciplinary efforts toward robust, equitable governance.

Conclusion

This paper provides a rigorous framework for understanding and mitigating the potential harms of increasingly agentic algorithmic systems. By enriching the FATE community's understanding of agency and advocating for proactive measures, this research emphasizes the importance of anticipatory governance and ethical foresight in the landscape of technological innovation. Future developments in AI, particularly those enmeshed within complex societal structures, require vigilant examination to balance technological benefits against their ethical and societal impacts.
