Papers
Topics
Authors
Recent
Gemini 2.5 Flash
Gemini 2.5 Flash
119 tokens/sec
GPT-4o
56 tokens/sec
Gemini 2.5 Pro Pro
43 tokens/sec
o3 Pro
6 tokens/sec
GPT-4.1 Pro
47 tokens/sec
DeepSeek R1 via Azure Pro
28 tokens/sec
2000 character limit reached

Harms from Increasingly Agentic Algorithmic Systems (2302.10329v2)

Published 20 Feb 2023 in cs.CY

Abstract: Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm, in domains as diverse as health care, finance, policing, and recommendations. Much work remains to be done to mitigate the serious harms of these systems, particularly those disproportionately affecting marginalized communities. Despite these ongoing harms, new systems are being developed and deployed which threaten the perpetuation of the same harms and the creation of novel ones. In response, the FATE community has emphasized the importance of anticipating harms. Our work focuses on the anticipation of harms from increasingly agentic systems. Rather than providing a definition of agency as a binary property, we identify 4 key characteristics which, particularly in combination, tend to increase the agency of a given algorithmic system: underspecification, directness of impact, goal-directedness, and long-term planning. We also discuss important harms which arise from increasing agency -- notably, these include systemic and/or long-range impacts, often on marginalized stakeholders. We emphasize that recognizing agency of algorithmic systems does not absolve or shift the human responsibility for algorithmic harms. Rather, we use the term agency to highlight the increasingly evident fact that ML systems are not fully under human control. Our work explores increasingly agentic algorithmic systems in three parts. First, we explain the notion of an increase in agency for algorithmic systems in the context of diverse perspectives on agency across disciplines. Second, we argue for the need to anticipate harms from increasingly agentic systems. Third, we discuss important harms from increasingly agentic systems and ways forward for addressing them. We conclude by reflecting on implications of our work for anticipating algorithmic harms from emerging systems.

User Edit Pencil Streamline Icon: https://streamlinehq.com
Authors (22)
  1. Alan Chan (23 papers)
  2. Rebecca Salganik (5 papers)
  3. Alva Markelius (5 papers)
  4. Chris Pang (1 paper)
  5. Nitarshan Rajkumar (11 papers)
  6. Dmitrii Krasheninnikov (10 papers)
  7. Lauro Langosco (5 papers)
  8. Zhonghao He (4 papers)
  9. Yawen Duan (8 papers)
  10. Micah Carroll (16 papers)
  11. Michelle Lin (6 papers)
  12. Alex Mayhew (1 paper)
  13. Katherine Collins (4 papers)
  14. Maryam Molamohammadi (4 papers)
  15. John Burden (13 papers)
  16. Wanru Zhao (16 papers)
  17. Shalaleh Rismani (8 papers)
  18. Konstantinos Voudouris (11 papers)
  19. Umang Bhatt (42 papers)
  20. Adrian Weller (150 papers)
Citations (74)

Summary

An Examination of Harms from Increasingly Agentic Algorithmic Systems

The paper "Harms from Increasingly Agentic Algorithmic Systems" presents a comprehensive analysis of the potential harms posed by algorithmic systems as they gain increasing levels of agency. The research primarily contributes to the ongoing discourse in Fairness, Accountability, Transparency, and Ethics (FATE), emphasizing the anticipation of harms due to the evolving nature of ML systems towards increased autonomy, goal-directed behavior, long-term planning, and underspecification.

Key Characteristics of Agency in Algorithmic Systems

The authors undertake the challenging task of defining "agency" in a non-binary manner, instead identifying the following characteristics associated with increasing agency in algorithmic systems:

  1. Underspecification: The extent to which systems can achieve objectives without specific procedural instructions.
  2. Directness of Impact: The degree to which these systems act upon the world autonomously.
  3. Goal-directedness: The systems' apparent pursuit of specific objectives.
  4. Long-term planning: The systems' capability to make decisions influenced by long-range objectives.

Anticipating Harms from Agentic Systems

The paper argues that recognizing and preparing for the possible harms of increasingly agentic systems is crucial, as development and deployment persist rapidly, driven by significant sociopolitical and economic incentives. Despite technical and theoretical barriers, the current trajectory of machine learning, particularly reinforcement learning and LLMs, suggests an increasing prevalence of agentic qualities. The anticipation of harms includes:

  • The exacerbation of systemic and delayed harms.
  • Collective disempowerment, either through power diffusion away from humans or concentration among select stakeholders.
  • The emergence of unforeseen harms or manipulative capacities due to complex goal-tracking behaviors.

Implications in the FATE Domain

Through the lens of FATE, the authors highlight the nuanced balance between technological advancement and ethical obligation. The sociotechnical ramifications of agentic systems, notably as they outpace regulatory frameworks and human oversight, pose a legitimate concern. The capability for systems to take autonomous actions—potentially misaligned with societal values—calls for a reevaluation of legal and institutional mechanisms to ensure accountability.

Proposal for Action and Further Research

The authors suggest several avenues to mitigate potential harms. They recommend comprehensive audits of agentic systems, explorations into sociotechnical characteristics, and the development of metrics to quantify levels of agency and its impacts. Additionally, regulatory interventions addressing compute-tracking and deployment bars could be essential in constraining harmful deployments. The integration of these systems into society should consider collective and interdisciplinary efforts toward robust, equitable governance.

Conclusion

This paper provides a rigorous framework for understanding and mitigating the potential harms of increasingly agentic algorithmic systems. By enriching the FATE community's understanding of agency and advocating for proactive measures, this research emphasizes the importance of anticipatory governance and ethical foresight in the landscape of technological innovation. Future developments in AI, particularly those enmeshed within complex societal structures, require vigilant examination to balance technological benefits against their ethical and societal impacts.

Youtube Logo Streamline Icon: https://streamlinehq.com