
Human-in-Command: Retaining Human Authority

Updated 6 November 2025
  • Human-in-Command is a paradigm where humans have exclusive, non-delegable authority over automated and AI systems, ensuring critical oversight.
  • It implements explicit technical and procedural checkpoints that require human intervention, preserving situational awareness and adaptive control.
  • Applications span UAV swarms, robotics, and AI decision-support systems, enhancing safety, legal compliance, and ethical accountability.

Human-in-Command (HIC) designates an organizational, technical, and oversight paradigm in which humans retain ultimate authority, responsibility, and intervention capacity in systems—particularly those with advanced automation or artificial intelligence. HIC is distinguished from Human-in-the-Loop (HITL) and Human-on-the-Loop (HOTL) by its insistence on irreducible human primacy: regardless of the system's autonomy, only a human can authorize, approve, or override actions deemed consequential, especially in high-stakes, risk-prone, or ethically charged contexts. HIC frameworks are central to domains such as command and control (C2), safety-critical robotics, intelligent vehicles, and AI-driven decision-support.

1. Structural Definition and Core Properties

Human-in-Command is defined as a model in which the human operator is the final arbiter of system outcomes, with the capacity to direct, correct, or entirely override autonomous actions. HIC systems explicitly institute non-delegable, governance-level oversight: all consequential system outputs, decisions, or communications are contingent on explicit human review and approval (Kandikatla et al., 10 Oct 2025). Unlike models that rely on AI to escalate to the human only under detected uncertainty (HITL), or models that permit discretionary human intervention after the fact (HOTL), HIC enforces a structural requirement: actions with significant operational, legal, or ethical impact are enacted only after direct human sanction (Madison et al., 9 Feb 2024, Wulf et al., 18 Jul 2025).

Key properties include:

  • Non-delegable human authority over system actions.
  • Procedural and technical mechanisms for human review or override.
  • Preservation of human agency, accountability, and situational awareness.
  • Formal, system-enforced checkpoints where human approval is mandatory.
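As a minimal sketch, the mandatory checkpoint described above can be expressed as an approval gate placed between proposal and enactment. The names `HICGate`, `Action`, and the reviewer callback are illustrative, not an interface from the cited works.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """A proposed system action awaiting authorization."""
    description: str
    consequential: bool  # carries operational, legal, or ethical impact

@dataclass
class HICGate:
    """Structural checkpoint: consequential actions require explicit
    human approval before enactment; there is no autonomous bypass."""
    human_approve: Callable[[Action], bool]  # non-delegable authority
    audit_log: list = field(default_factory=list)

    def enact(self, action: Action) -> bool:
        if action.consequential:
            approved = self.human_approve(action)  # mandatory review
        else:
            approved = True  # routine actions may proceed
        self.audit_log.append(
            (action.description, "approved" if approved else "blocked"))
        return approved

# Usage: a reviewer callback stands in for an operator console.
gate = HICGate(human_approve=lambda a: "delete" not in a.description)
assert gate.enact(Action("send status report", consequential=False))
assert not gate.enact(Action("delete customer record", consequential=True))
```

The point of the sketch is that the gate, not the AI's own confidence estimate, decides when a human is consulted; that is what distinguishes HIC from escalation-based HITL designs.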

2. Authority Sharing and Adaptive Autonomy

In multi-agent and autonomous systems (e.g., UAV swarms), authority sharing refers to the dynamic allocation of functional responsibility—across stages such as Observe, Orient, Decide, Act (OODA)—between human and system (0811.0335). For each task phase, several modes are possible, ranging from full manual control, through automated execution with human veto rights, to fully autonomous operation. Selecting and adjusting these modes is itself a meta-level authority-sharing challenge: determining who (human or machine) sets the operational mode at any instant.

Maintenance of the HIC paradigm under such distributed autonomy involves:

  • Ensuring human engagement and intervention capability at all decomposition levels and stages.
  • Explicitly modeling and managing the operator's involvement in both mission execution and interaction processes.
  • Implementing adaptive balancing mechanisms that modulate system initiative in accordance with operator workload and mission context, preventing both automation surprise (loss of situation awareness) and operator overload.
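A per-stage mode table with a meta-level rule for who may change modes could be sketched as follows; the `Mode` levels and the reservation of the "act" stage to human re-moding are illustrative assumptions, not a scheme taken from the cited paper.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = 1          # human performs the function
    VETOABLE_AUTO = 2   # system acts; human holds veto rights
    AUTONOMOUS = 3      # system acts without per-action review

OODA_STAGES = ("observe", "orient", "decide", "act")

class AuthorityAllocator:
    """Per-OODA-stage mode table plus a meta-level rule deciding who
    may change modes. Under HIC, mode changes for the 'act' stage are
    reserved to the human regardless of workload or context."""

    def __init__(self):
        self.modes = {stage: Mode.MANUAL for stage in OODA_STAGES}

    def set_mode(self, stage: str, mode: Mode, requested_by: str) -> bool:
        if stage == "act" and requested_by != "human":
            return False  # non-delegable: only a human re-modes 'act'
        self.modes[stage] = mode
        return True

alloc = AuthorityAllocator()
assert alloc.set_mode("observe", Mode.AUTONOMOUS, requested_by="system")
assert not alloc.set_mode("act", Mode.AUTONOMOUS, requested_by="system")
assert alloc.set_mode("act", Mode.VETOABLE_AUTO, requested_by="human")
```

Here the meta-level question (who sets the mode) is answered by a fixed HIC invariant for the consequential stage, while less consequential stages may be re-moded adaptively.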

3. HIC in Command and Control (C2) of Mixed Human–AI Teams

Within future C2 architectures, HIC is operationalized as the explicit assignment of ultimate command responsibility to humans, especially in scenarios involving mixed teams of humans and "intelligent things" (e.g., autonomous robots, agents) (Kott et al., 2017, Madison et al., 9 Feb 2024). The command process, broken into essential functions—command, control, sensemaking, execution, and monitoring—requires allocation based on complementary strengths. Humans are tasked primarily with establishing organizational intent, managing trust, contextualizing sensemaking, oversight, and negotiating sociopolitical complexities, while machines excel in rapid adaptation, data processing, and optimization.

The dynamic, adaptive assignment of decision rights, coupled with ongoing monitoring and the technical capacity for human override or redirection, embodies HIC within military and organizational structures. The requisite frameworks incorporate:

  • Multimodal human feedback into learning and planning cycles.
  • Mechanisms for trust calibration and shared situation awareness.
  • Scalable technical architectures supporting HIC under denied, degraded, intermittent, or limited communications (Madison et al., 9 Feb 2024).
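The complementary allocation of C2 functions described above can be sketched as a simple function-to-authority map; the specific assignments shown are illustrative, not prescriptions from the cited frameworks.

```python
# Illustrative allocation of C2 functions by complementary strength.
ALLOCATION = {
    "command":     "human",    # intent, trust, sociopolitical judgment
    "control":     "shared",
    "sensemaking": "shared",   # machine fuses data; human contextualizes
    "execution":   "machine",  # rapid adaptation and optimization
    "monitoring":  "shared",
}

def requires_human(function: str) -> bool:
    """HIC invariant: command authority is never allocated to the
    machine alone; shared functions keep a human engaged."""
    return ALLOCATION[function] in ("human", "shared")

assert requires_human("command")
assert not requires_human("execution")
```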

4. HIC in AI System Oversight and Technical Services

In AI-enabled technical service and decision-support systems, HIC is realized by requiring mandatory human review and explicit approval of all AI-generated recommendations or actions prior to enactment (Wulf et al., 18 Jul 2025, Kandikatla et al., 10 Oct 2025). This procedural structuring is codified as the "Human-in-Command" model in technical services taxonomies: AI systems gather data, formulate solutions, and propose actions, but no customer-facing response or external action is possible until a human operator has scrutinized and sanctioned the output.

Relative to HITL and HOTL models:

  • HIC mandates human validation as a non-discretionary workflow checkpoint.
  • All risk-bearing, sensitive, or compliance-relevant decisions flow through human gatekeeping by design (see Table 1).
Table 1. Comparison of oversight models.

  Model   AI Autonomy     Human Authority   Approval Mechanism
  HOTL    High            Supervisory       Discretionary
  HITL    Medium          Exception-based   Escalation (optional)
  HIC     Proposes only   Command           Mandatory

Applicability of HIC is determined by task complexity, operational risk, system trustworthiness, cognitive resource availability, and regulatory/ethical imperatives (Wulf et al., 18 Jul 2025, Kandikatla et al., 10 Oct 2025).
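The contrast between the three approval mechanisms can be sketched as a routing function; the uncertainty threshold and return labels are illustrative assumptions, not values from the cited taxonomies.

```python
def approval_path(model: str, uncertainty: float) -> str:
    """Route a proposed AI action to the applicable approval mechanism
    under each oversight model (threshold is illustrative)."""
    if model == "HIC":
        return "human_approval_required"  # mandatory gate, always
    if model == "HITL":
        # escalate to a human only when the system detects uncertainty
        return "human_escalation" if uncertainty > 0.3 else "auto_execute"
    if model == "HOTL":
        # system acts; human may intervene after the fact
        return "auto_execute_with_supervision"
    raise ValueError(f"unknown oversight model: {model}")

assert approval_path("HIC", uncertainty=0.0) == "human_approval_required"
assert approval_path("HITL", uncertainty=0.9) == "human_escalation"
```

Note that under HIC the routing ignores the system's own uncertainty estimate entirely: human review is a structural constant, not a function of detected risk.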

5. Interaction Management, Workload Balancing, and Human Agency

Modern HIC-compliant interfaces deploy an interaction manager, tasked with adapting interaction complexity and modality based on the operator's real-time mission workload (0811.0335). Workload is assessed via discrete or continuous models tracking operator interventions and event occurrence, producing input for adaptive interface behavior.

For example:

  • When mission workload is low, the system can place greater demands on the operator for confirmations or clarifications, thus maintaining engagement and situation awareness.
  • Under high or emergency load, the system handles ambiguities or non-understandings with minimal human input, freeing cognitive resources, yet always allowing for intervention.
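The workload-based switching in the two bullets above can be sketched as a small policy function; the thresholds and field names are illustrative, not parameters from the cited work.

```python
def interaction_policy(workload: float) -> dict:
    """Adapt interaction demands to operator mission workload in [0, 1].
    Thresholds are illustrative assumptions."""
    if workload < 0.3:
        # low load: solicit confirmations to keep the operator engaged
        return {"confirmations": "request", "autonomy": "low",
                "override_available": True}
    if workload < 0.7:
        return {"confirmations": "critical_only", "autonomy": "medium",
                "override_available": True}
    # emergency load: resolve ambiguities autonomously, but keep the
    # human intervention channel open at all times (HIC requirement)
    return {"confirmations": "suppress", "autonomy": "high",
            "override_available": True}

assert interaction_policy(0.1)["confirmations"] == "request"
assert interaction_policy(0.9)["override_available"] is True
```

Whatever the workload, `override_available` never becomes false: autonomy scales, but the intervention capability does not.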

This adaptive, collaborative interaction—with a focus on “grounding” mutual understanding and preserving robust human agency—addresses risks of overload, deskilling, and loss of control in high-autonomy systems.

6. Accountability, Legal Responsibility, and Advance Control Directives

In high-consequence domains (autonomous weapon systems, air traffic, finance), HIC is directly mapped to accountability structures. For scenarios where real-time oversight is infeasible (e.g., ultra-fast or protracted operations), legal responsibility and ethical actionability are preserved through advance control directives (ACDs): detailed, contract-like documents established pre-deployment that codify operational parameters, values, responsibilities, and after-action review requirements (Devitt, 2023). This mechanism, inspired by advance care directives in medicine, ensures that even outside real-time supervision, humans remain morally and legally “in command” of autonomous systems.

Critical elements include:

  • Deliberative, multidisciplinary advance control planning.
  • Simulation and scenario modeling covering likely operational regimes.
  • Documentation establishing explicit lines of authority, risk preferences, and legal/ethical baselines.
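A possible shape for an ACD record, covering the elements listed above, is sketched below; all field names and values are illustrative assumptions, not a schema from Devitt (2023).

```python
from dataclasses import dataclass, field

@dataclass
class AdvanceControlDirective:
    """Sketch of an ACD record: a pre-deployment document binding an
    autonomous system's operating envelope to named human authority.
    Field names are illustrative, not a published schema."""
    responsible_commander: str        # explicit line of authority
    operational_parameters: dict      # codified operating envelope
    risk_preferences: dict            # pre-deliberated risk posture
    values_constraints: list          # legal/ethical baselines
    after_action_review_due: str      # ISO date for mandatory review
    signatories: list = field(default_factory=list)

    def is_authorized(self) -> bool:
        # An ACD without human signatories confers no authority
        return len(self.signatories) > 0

acd = AdvanceControlDirective(
    responsible_commander="<named human>",
    operational_parameters={"area": "training range"},
    risk_preferences={"collateral_risk": "none"},
    values_constraints=["IHL compliance"],
    after_action_review_due="2026-06-30",
)
assert not acd.is_authorized()
acd.signatories.append("<commander signature>")
assert acd.is_authorized()
```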

Challenges remain around scenario unpredictability, the need for rigorous advance deliberation, and the maintenance of clear, auditable lines of responsibility, particularly as autonomy escalates and the boundary between AI suggestion and action becomes more ambiguous.

7. Implementation, Practical Applications, and Comparative Metrics

HIC's technical realization varies by application:

  • In robotics and cyber-physical systems, mixed-initiative control architectures (e.g., EMICS) enable variable autonomy and seamless handover while ensuring that human operators always retain override capacity (Chiou et al., 2019).
  • In mission-critical interfaces, adaptive and collaborative management of both mission and interaction workload is central (0811.0335).
  • In AI-driven technical services, interface workflows must enforce hard approval boundaries, not merely opportunity for intervention (Wulf et al., 18 Jul 2025).
  • In automated medical devices (e.g., insulin pumps), formal models (e.g., HIL-HIP, CLBFs) extend HIC to include nuanced, continuous human influence within the plant/process and provide mathematically verifiable safety certificates (Banerjee et al., 22 Aug 2024).

Empirical evaluations of these mechanisms, including controlled trials of mixed-initiative control, are reported in the works cited above.

References Table: Foundational HIC Literature

  Domain               Key Reference                                               Core HIC Mechanism
  UAV swarms           (0811.0335)                                                 OODA-based authority allocation
  C2/warfare           (Kott et al., 2017; Madison et al., 9 Feb 2024)             Complementary C2, SIML frameworks
  AI governance        (Kandikatla et al., 10 Oct 2025; Wulf et al., 18 Jul 2025)  Mandatory human gatekeeping
  Weapon systems       (Devitt, 2023)                                              Advance Control Directives
  Robotics             (Chiou et al., 2019)                                        Mixed-initiative override
  Autonomous medicine  (Banerjee et al., 22 Aug 2024)                              HIL-HIP, formal safety analysis

Summary

Human-in-Command provides the formal principle and operational structures ensuring that, even as automation and AI advance, humans retain ultimate authority, oversight, and capacity for intervention across system outputs, functions, and time-scales. HIC architectures systematically embed procedural, technical, and organizational mechanisms—mode selection, adaptive workload management, legal/ethical constructs, and safety certificates—to guarantee agency, accountability, and the irreducible role of the human as commander, especially in the management of risk, ambiguity, and ethical consequence.
