When Autonomy Breaks: The Hidden Existential Risk of AI (2503.22151v1)

Published 28 Mar 2025 in cs.CY, cs.AI, and cs.HC

Abstract: AI risks are typically framed around physical threats to humanity, a loss of control or an accidental error causing humanity's extinction. However, I argue in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI starts to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, creativity, social care or even leadership. What may follow is a process of gradual de-skilling, where we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time, and that these skills are innate and immutable in humans. By contrast, I argue that humans may lose such skills as critical thinking, decision-making and even social care in an AGI world. The biggest threat to humanity is therefore not that machines will become more like humans, but that humans will become more like machines.

Summary

The Hidden Existential Risk of AI: Human Autonomy and De-skilling

The paper "When Autonomy Breaks: The Hidden Existential Risk of AI" by Joshua Krook presents a nuanced argument highlighting an existential risk often overshadowed by narratives focusing on AI's immediate physical threats to humanity. Krook emphasizes that the gradual erosion of human autonomy due to increasing dependency on AI constitutes an existential risk. This risk emerges as AI progressively outstrips human capabilities in decision-making, social interaction, and leadership.

The gradual disempowerment thesis forms the backbone of the paper, asserting that as AI becomes more proficient, humans are likely to outsource more of their decision-making responsibilities to machines. The result is a de-skilling process in which humans lose faculties such as critical thinking, creativity, and social care, skills traditionally assumed to be innate and immutable. Krook warns of a future in which humans become more like machines, guided less by their own autonomy than by algorithmic choices, much as individuals deemed incapable of managing their own affairs are placed under conservatorship for their own welfare.

One of the paper's central premises is that AI capabilities are advancing at an exponential rate, driven by improvements in computational power consistent with Moore's Law. Current advances in AI, particularly in LLMs such as the GPT models, show machines becoming increasingly capable at tasks once exclusive to human cognition. With LLMs outperforming humans on various benchmarks, Krook argues this trend could lead societies to favor machine decision-making over human autonomy. The resulting reliance poses a dilemma: humans can assert their agency and accept suboptimal outcomes, or surrender autonomy in exchange for the superior decisions of machines.

Krook explores the implications of widespread decision outsourcing to AI, drawing parallels with legal conservatorships. He vividly illustrates this concept using the case of Britney Spears' conservatorship, where autonomy was revoked in favor of decisions by a guardian. Similarly, in an AI-dominated future, reliance on AGI might put the entire human race under a metaphorical conservatorship, ostensibly for better collective outcomes.

A compelling aspect of Krook's argument is the projection of de-skilling in humanity. Historical patterns from prior technological revolutions illustrate how reliance on technology can erode skills. The Theory of Technology Dominance posits that efficiency gains from technology adoption can diminish human capabilities through reduced cognitive engagement and reliance on automated processes. Krook extends this argument to AI, suggesting that over-dependence on these systems may cause cognitive faculties such as critical thinking to atrophy as tasks traditionally requiring human judgment are offloaded.

Beyond individual cognitive decline, Krook's vision presents broader societal implications. De-skilling threatens to create a populace ill-equipped to navigate complex ethical and moral landscapes independently. The reduction of human agency in decision-making processes could alter societal structures fundamentally, challenging the axioms of democratic participation and personal responsibility.

While some may argue that outsourcing decisions to AI could enhance efficiency and optimize outcomes, Krook cautions firmly against the unbridled surrender of autonomy for perceived benefits. This argument resonates with philosophical accounts of the inherent value of human freedom and the dangers of excessive reliance on technocratic governance.

In conclusion, Krook advocates for collaborative frameworks in which human decision-making works in tandem with AI assistance rather than being supplanted by it. He underscores the ethical imperative of preserving human agency despite AI advancements. Future work should examine how to balance AI capabilities with human values to mitigate the risk of gradual subjugation.

This paper contributes significantly to discussions on the future interactions between humans and machines, raising critical questions on autonomy, ethical reasoning, and societal impacts. It encourages deeper examination into safeguarding human autonomy in an AI-augmented world.
