Principle of Essential and Sufficient Autonomy (PESA)
- PESA is a principle that defines the precise level of autonomy an agent needs to achieve its goals safely and efficiently.
- It integrates adaptive control, formal verification, and human oversight to balance performance with safety and ethical constraints.
- Its implementation spans sectors like transportation, healthcare, and space, guiding system design, assurance, and ethical analyses.
 
The Principle of Essential and Sufficient Autonomy (PESA) encapsulates a foundational approach to the engineering, deployment, and ethical analysis of autonomous systems. Its central tenet is that any autonomous agent—physical, digital, or hybrid—must possess exactly the autonomy necessary to reliably achieve its functional goals in unpredictable environments, while not exceeding limits that undermine safety, social compatibility, or operational trustworthiness. PESA serves as both a normative and practical guideline for the integration and optimization of autonomy across diverse domains, influencing methodologies ranging from system design and assurance to human-machine interaction and the operationalization of ethical constraints.
1. Conceptual Foundations and Formalization
PESA is rarely named directly in the research literature but is implicitly addressed in discussions distinguishing autonomy from automation. Autonomy is characterized by adaptive, goal-driven operation in dynamic environments, as opposed to the rigid execution of pre-specified rule sets in automation (Hager et al., 2016). The essential aspect of autonomy is the minimal set of independent functionalities required for an agent to achieve its assigned goals, while sufficiency is the upper bound beyond which additional autonomy ceases to be beneficial—whether due to safety risks, emergent complexity, or loss of human oversight.
A canonical formal framing inspired by control system theory poses autonomy as a constrained optimization problem:

$$\min_{u}\; J(u) \quad \text{subject to} \quad g(u) \le 0, \quad h(u) = 0,$$

where $u$ are the agent's control inputs, $J(u)$ is a performance or safety cost, and $g$ and $h$ codify hard and soft constraints including law, safety, and system invariants (Hager et al., 2016). This formalism expresses the operational core of PESA—agents may independently select control actions as long as all essential and sufficient constraints are satisfied.
2. Decision-Making, Learning, and Human Interaction
PESA is realized in practice through decision cycles that incorporate perception, reflection, goal management, planning, and self-adaptation (Harel et al., 2019). In modern autonomous systems, this cycle must balance independence in goal pursuit with strict adherence to operational rules and social norms—for example, a car navigating traffic and parking itself while always obeying traffic laws and yielding to pedestrians (Hager et al., 2016). The capacity for "fail gracefully" behaviors is critical: agents must back off or query for human input when their autonomy would otherwise exceed safety bounds.
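The decision cycle and its graceful-failure path can be sketched as a simple loop. The `Agent` class, its `safety_margin`, and the one-step plan are hypothetical stand-ins for illustration, not a published architecture.

```python
# Sketch: a perceive-plan-act cycle with an explicit "fail gracefully"
# branch that defers to a human when the plan exceeds safety bounds.

class Agent:
    def __init__(self, safety_margin):
        self.safety_margin = safety_margin

    def perceive(self, world):
        return world  # stand-in for sensing / state estimation

    def plan(self, state, goal):
        return {"move": goal - state}  # trivial one-step plan

    def within_safety_bounds(self, plan):
        return abs(plan["move"]) <= self.safety_margin

    def step(self, world, goal):
        state = self.perceive(world)
        plan = self.plan(state, goal)
        if self.within_safety_bounds(plan):
            return ("act", plan)
        return ("query_human", plan)  # back off instead of exceeding bounds

agent = Agent(safety_margin=1.0)
print(agent.step(world=0.0, goal=0.5))  # ('act', {'move': 0.5})
print(agent.step(world=0.0, goal=5.0))  # ('query_human', {'move': 5.0})
```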
Learning to optimize autonomy, particularly in competence-aware systems (CAS), employs introspective models that dynamically update autonomy levels based on online experience and human feedback, formalized as

$$l^{*} = \arg\min_{l \in L}\; \mathbb{E}\!\left[C(l, \lambda_H)\right],$$

where $\mathbb{E}[C(l, \lambda_H)]$ is the expected cost, $l$ is the autonomy level, and $\lambda_H$ is the agent's converged model of human feedback (Basich et al., 2020). Exploration is gated by requirements for human approval to ensure safe expansion of autonomous capabilities (Basich et al., 2020).
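A minimal sketch of this gated autonomy-level selection follows. The discrete levels, cost table, and `human_approves` callback are illustrative assumptions in the spirit of competence-aware systems, not the cited model itself.

```python
# Sketch: choose the autonomy level with the lowest expected cost,
# but allow an *increase* in autonomy only with human approval.

def choose_level(levels, expected_cost, current, human_approves):
    best = min(levels, key=expected_cost)
    if best > current and not human_approves(best):
        return current  # exploration gated by human approval
    return best

levels = [0, 1, 2]              # e.g., supervised -> semi -> full autonomy
costs = {0: 5.0, 1: 2.0, 2: 1.0}

# The human withholds approval for full autonomy, so the agent stays put:
lvl = choose_level(levels, lambda l: costs[l], current=1,
                   human_approves=lambda l: l <= 1)
print(lvl)  # 1
```

Once the human approves the higher level (i.e., `human_approves` returns `True` for it), the same call would return level 2, the cost-optimal choice.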
3. Assurance, Verification, and System Engineering
Adherence to PESA in safety-critical contexts demands rigorous assurance frameworks. The assurance of autonomy includes formal verification (e.g., model checking, automated test-case and oracle generation), hazard analysis, and assurance cases that are adaptable to evolving system states and learning-enabled modules (Feather et al., 2023). The assurance process embodies PESA by deploying autonomy only when it is essential for mission success and sufficiently verified as safe and reliable.
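One concrete assurance mechanism compatible with this picture is a runtime monitor that checks system invariants on every observed state. The invariants and trace below are toy assumptions for illustration, not an actual verification toolchain.

```python
# Sketch: a minimal runtime assurance monitor. Every state in a trace
# is checked against a set of invariants; the first violation is
# reported so a safe fallback can be triggered.

def monitor(trace, invariants):
    """Return the index of the first violating state, or -1 if none."""
    for i, state in enumerate(trace):
        if not all(inv(state) for inv in invariants):
            return i
    return -1

invariants = [
    lambda s: s["speed"] <= 30,    # speed limit
    lambda s: s["distance"] >= 2,  # minimum following distance
]
trace = [
    {"speed": 25, "distance": 10},
    {"speed": 28, "distance": 5},
    {"speed": 33, "distance": 4},  # violates the speed invariant
]
print(monitor(trace, invariants))  # 2
```

Offline tools such as model checkers verify these properties over all reachable states; a runtime monitor like this one complements them for learning-enabled modules whose behavior evolves after deployment.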
Formally, assurance may be represented abstractly as

$$\mathcal{A} = f(S, E, T),$$

where $S$ is the autonomous system, $E$ the operating environment, and $T$ the trust metrics (Topcu et al., 2020). This expression encapsulates the multidimensional factors that must be balanced in implementing PESA.
4. Domains of Application and Sectoral Impact
PESA is referenced implicitly across transportation, healthcare, manufacturing, disaster response, and space systems. In autonomous vehicles, human-centered designs promote shared perception-control, deep personalization, and an explicit acknowledgment of system imperfections—integrating the human into the autonomy loop for optimal safety, trust, and engagement (Fridman, 2018). In socially assistive robots, maintaining user autonomy necessitates the agent’s ability to detect nuanced cues, adapt assistance levels, and transparently explain intent, such that vulnerable populations retain independence, choice, control, and identity (Wilson, 2022).
In spacecraft engineering, "Autonomy at Levels" implements PESA by distributing autonomy loops throughout all system layers. Each loop is tuned to be essential and sufficient for its level—local reflexes for survival, higher-level loops for mission success and collective coordination (Baker et al., 2025). Hierarchical integration mirrors biological systems and systems engineering principles, ensuring robustness and scalability without unnecessary centralization.
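The layered-loop idea can be sketched as nested control loops running at different rates: a fast reflex loop on every tick and a slower mission-level loop on a longer period. The tick counts and loop names are illustrative assumptions.

```python
# Sketch: "Autonomy at Levels" as loops with different rates.
# The reflex loop fires every tick (local survival); the mission
# loop fires on a slower schedule (goal management, coordination).

def run_layers(ticks, mission_period=5):
    log = []
    for t in range(ticks):
        log.append((t, "reflex"))        # fast loop: always runs
        if t % mission_period == 0:
            log.append((t, "mission"))   # slow loop: runs periodically
    return log

events = run_layers(6)
print([e for e in events if e[1] == "mission"])  # [(0, 'mission'), (5, 'mission')]
```

Keeping the fast loop local and self-contained means a stalled mission-level loop cannot block survival reflexes—the robustness-without-centralization property described above.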
5. Ethical, Social, and Human-Centered Implications
Ethical considerations under PESA emphasize the dual explanation of autonomous behavior: causally determined processes and rational deliberation (Hooker, 2018). Agents that can articulate reasons for their actions, naturally and rationally, are owed moral consideration. The universality constraint for ethical action plans prevents incoherent or self-undermining behaviors. In recommender systems and data-driven AI, respecting human autonomy means systems must not undermine individual liberty or agency, and must transparently communicate influences and provide the minimal, sufficient affordances for self-determined choice (Varshney, 2020; Wang et al., 2024).
Empirical studies in global software development identify autonomy as necessary but not alone sufficient for motivation; competence and relatedness must accompany autonomy, suggesting a revision or augmentation of PESA in such socio-technical contexts (Noll et al., 2020).
6. Methodological Trends and Future Research Directions
PESA motivates ongoing research in hybrid decision-making frameworks, resilient agent architectures, data standardization, and benchmarking for collective and contextual learning (Abbeel et al., 2016, Harel et al., 2019). Interdisciplinary approaches that blend model-driven methods and machine learning are favored, especially for handling unpredictability and scaling assurance processes to complex, adaptive environments. The shift toward "design, provisionally verify, continually re-validate" supports lifelong assurance and responsiveness to environmental change (Topcu et al., 2020).
In algorithmic autonomy, research seeks to empower users through agency-fostering infrastructures, participatory design, and mathematical models that quantify and enforce the threshold for self-determined interaction (Wang et al., 2024). The prospect of artificial moral agents—potentially devoid of consciousness but qualifying under operational autonomy indices—raises open philosophical challenges regarding patiency and the sufficiency condition within PESA (Formosa et al., 2025).
7. Limitations, Controversies, and Evolving Interpretations
While PESA provides a rigorous conceptual framework, practical limitations include the specification of autonomy thresholds, real-time validation, and the management of resource allocation in distributed autonomy networks, especially in spacecraft and large-scale cyber-physical systems (Baker et al., 2025). The empirical finding that autonomy is not sufficient for motivation without competence and relatedness indicates that human factors may necessitate context-specific extensions to PESA (Noll et al., 2020). In AI ethics, the distinction between moral agency and patiency, especially regarding non-conscious systems, challenges strict formulations of autonomy and moral status (Formosa et al., 2025).
The ongoing evolution of PESA reflects the dynamic landscape of autonomy research, balancing technical ambition with societal, ethical, and operational constraints. As autonomous systems become ubiquitous, PESA provides a principled foundation for their responsible design, assurance, and integration into human contexts.