
Human Attribute Taxonomy for Hybrid Systems

Updated 17 November 2025
  • Human-Attribute Taxonomy is a framework that defines 12 distinct axes differentiating human and ML decision-making processes.
  • It categorizes differences into task, input, internal processing, and output stages, enabling a structured mathematical aggregation of decisions.
  • The taxonomy informs hybrid system design by leveraging agent complementarity to achieve joint performance that outperforms standalone approaches.

A human-attribute taxonomy systematically characterizes the distinct axes along which human and ML agents differ in decision-making systems, providing a conceptual foundation for understanding and engineering human–ML complementarity in consequential decision contexts. The taxonomy enumerates 12 attributes, grouped along four computational-level stages: task definition, input, internal processing, and output. Each attribute formalizes a specific source of discrepancy between human and ML agents, and together they form the basis for mathematical analysis of joint performance and of the conditions under which hybrid systems can surpass either agent alone (Rastogi et al., 2022).

1. Taxonomy of Human vs. ML Decision-Making Attributes

The taxonomy comprises 12 attributes, each with a precise definition, rationale, and notation indicating how humans and ML agents diverge in operational mechanisms and decision heuristics.

| Attribute | Notation | Example Context |
| --- | --- | --- |
| Objective | $f_h \neq f_m$ | Human loan officer vs. ML risk minimization |
| Misaligned Construct of Interest | $y_h(x,a) \neq y_m(x,a)$ | Medical necessity vs. historic spending |
| Access to Different Information | $\mathcal{X}_h \neq \mathcal{X}_m$ | Judge's perception vs. ML coded data |
| Nature of Past Experiences | $\mathcal{D}_h \neq \mathcal{D}_m$ | Physician's clinical history vs. ML dataset |
| Models of the World | $\Pi_h \neq \Pi_m$ | Human rules-of-thumb vs. NN feature maps |
| Input Processing and Perception | $\varphi_h(x) \neq \varphi_m(x)$ | Probability distortion, causal/statistical bias |
| Choosing Among Models (Optimization) | $\text{OPT}_h \neq \text{OPT}_m$ | Human heuristics vs. ML global optimization |
| Available Actions | $\mathcal{A}_h \neq \mathcal{A}_m$ | Human flexibility vs. ML prescription |
| Explaining the Decision | - | Social narrative vs. model saliency map |
| Uncertainty Communication | $\hat{p}$ (ML) vs. $c_h$ (human) | Probabilistic output vs. discrete confidence |
| Output Consistency | $P[\pi_h(x) \neq \pi_h'(x)] > 0$ | Human noise vs. ML determinism |
| Time Efficiency | $T_h \gg T_m$ | ML high throughput vs. human limits |
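The four stages and their attributes can be captured in a small data structure for use in downstream analysis code. The grouping below is a sketch following the table above; the dictionary keys and attribute identifiers are illustrative abbreviations, not names from the paper:

```python
# Illustrative mapping of the four computational-level stages to their
# attributes (identifiers abbreviated; the grouping follows the taxonomy).
TAXONOMY = {
    "task_definition": ["objective", "construct_of_interest"],
    "input": ["information_access", "past_experiences"],
    "internal_processing": ["world_models", "input_perception", "optimization"],
    "output": [
        "available_actions",
        "explanation",
        "uncertainty_communication",
        "consistency",
        "time_efficiency",
    ],
}

# Sanity checks: four stages, 12 attributes in total.
assert len(TAXONOMY) == 4
assert sum(len(attrs) for attrs in TAXONOMY.values()) == 12
```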

Task Definition differences comprise objective misalignment and divergence between constructs of interest, reflecting that ML agents usually optimize explicit, mathematically tractable objectives, whereas humans apply broader, context-sensitive evaluative functions ($f_h$ vs. $f_m$). ML models typically learn to optimize a proxy $y_m$, while human agents reason about a latent, true construct $y_h$.

Input Differences pertain to disparities in observed feature spaces ($\mathcal{X}_h$, $\mathcal{X}_m$) and the nature or breadth of prior experience ($\mathcal{D}_h$, $\mathcal{D}_m$), revealing that lived experience and unstructured observation contribute to divergences in judgment.

Internal Processing Divergences are articulated through policy classes ($\Pi_h$, $\Pi_m$), perception mappings ($\varphi_h(x)$, $\varphi_m(x)$), and the adopted optimization strategies (human heuristics $\text{OPT}_h$ versus ML global search $\text{OPT}_m$). These highlight differences in causal versus statistical reasoning, bounded rationality, and algorithmic constraints.

Output Heterogeneity encompasses not only the action spaces ($\mathcal{A}_h$, $\mathcal{A}_m$) but also differences in explanation modalities, uncertainty communication, temporal consistency, and efficiency.

2. Mathematical Aggregation Framework Formalizing Complementarity

A convex combination framework formalizes hybrid human–ML decision-making, enabling precise analysis of when and how agent complementarity emerges. Each agent supplies a local policy, evaluated on a global feature space $\mathcal{X}$, with possible decisions in action space $\mathcal{A}$.

Given $n$ decision instances $x_i \in \mathcal{X}$, the aggregation constructs a weighted policy:

$$\pi(x_i) = w_h^i \cdot \pi_h(x_i) + w_m^i \cdot \pi_m(x_i) \quad \text{with} \quad w_h^i, w_m^i \in [0,1], \quad w_h^i + w_m^i = 1 \quad (1)$$

  • $w_h^i$ and $w_m^i$ are per-instance mixing weights.
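Equation (1) can be sketched directly in code. The helper name `aggregate` is hypothetical, and policies are represented here as per-instance numeric decisions (e.g., risk scores in $[0, 1]$):

```python
import numpy as np

def aggregate(pi_h, pi_m, w_h):
    """Per-instance convex combination of human and ML policies (Eq. 1).

    pi_h, pi_m : per-instance decisions of the human and ML agents
    w_h        : per-instance human weights in [0, 1]; w_m = 1 - w_h
    """
    w_h = np.asarray(w_h, dtype=float)
    return w_h * np.asarray(pi_h) + (1.0 - w_h) * np.asarray(pi_m)

# Pure routing: each instance is handled entirely by one agent.
routed = aggregate([1.0, 0.0], [0.0, 1.0], w_h=[1.0, 0.0])

# Full blending: both agents contribute equally on every instance.
blended = aggregate([1.0, 0.0], [0.0, 1.0], w_h=[0.5, 0.5])
```

Setting $w_h^i \in \{0, 1\}$ recovers deferral/routing, while interior weights blend both agents' decisions on the same instance.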

Optimal aggregation chooses the weight vector to maximize the target score:

$$\pi^* \in \arg\max_{\pi} f(\pi) \quad (2)$$

where $f(\cdot)$ is the evaluation function.

A system exhibits complementarity if:

$$f(\pi^*) > \max \{ f(\pi_h), f(\pi_m) \} \quad (3)$$

This quantifies joint-system value beyond the best stand-alone agent.
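A minimal sketch of Eqs. (2)–(3), under the assumption that $f$ decomposes per instance: `score` below is a stand-in negative-squared-error evaluation function (not the paper's $f$), and `best_weights` does a simple per-instance grid search rather than a full joint optimization:

```python
import numpy as np

def score(pi, y):
    """Stand-in evaluation function f(.): negative mean squared error."""
    return -float(np.mean((np.asarray(pi) - np.asarray(y)) ** 2))

def best_weights(pi_h, pi_m, y, grid=np.linspace(0.0, 1.0, 11)):
    """Per-instance grid search for mixing weights maximizing f (Eq. 2)."""
    pi_h, pi_m, y = map(np.asarray, (pi_h, pi_m, y))
    w = np.empty(len(y))
    for i in range(len(y)):
        preds = grid * pi_h[i] + (1.0 - grid) * pi_m[i]
        w[i] = grid[np.argmin((preds - y[i]) ** 2)]
    return w

# Each agent is accurate on a different half of the instances.
y = np.array([1.0, 1.0, 0.0, 0.0])
pi_h = np.array([1.0, 1.0, 1.0, 1.0])
pi_m = np.array([0.0, 0.0, 0.0, 0.0])

w = best_weights(pi_h, pi_m, y)
pi_star = w * pi_h + (1.0 - w) * pi_m

# Complementarity (Eq. 3): strict improvement over the best standalone agent.
assert score(pi_star, y) > max(score(pi_h, y), score(pi_m, y))
```

Here the optimal weights route each instance to the locally correct agent, so the aggregated policy strictly beats both standalone scores.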

Collaboration mechanisms are further distinguished:

  • Across-instance complementarity (AC):

$$AC(w) = \frac{1}{n}\sum_{i=1}^n (w_m^i - \overline{w}_m)^2 = \frac{1}{n}\sum_{i=1}^n (w_h^i - \overline{w}_h)^2 \quad (4)$$

where $\overline{w}_m = (1/n)\sum_i w_m^i$. AC measures routing/deferral frequency.

  • Within-instance complementarity (WC):

$$WC(w) = 1 - \frac{1}{n} \sum_{i=1}^n (w_h^i - w_m^i)^2 \quad (5)$$

WC is maximal for full blending ($w_h^i = w_m^i = 0.5$ for all $i$), and zero for pure routing.
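Both measures can be computed from the human weight vector alone, since $w_m^i = 1 - w_h^i$ makes the two variances in Eq. (4) identical. A sketch:

```python
import numpy as np

def across_instance(w_h):
    """AC (Eq. 4): variance of the per-instance weights.

    High when the mix differs across instances (routing/deferral);
    identical whether computed from w_h or w_m = 1 - w_h.
    """
    w_h = np.asarray(w_h, dtype=float)
    return float(np.mean((w_h - w_h.mean()) ** 2))

def within_instance(w_h):
    """WC (Eq. 5): 1 minus the mean squared gap between the two weights.

    Equals 1 at full blending (w_h = w_m = 0.5) and 0 at pure routing.
    """
    w_h = np.asarray(w_h, dtype=float)
    w_m = 1.0 - w_h
    return float(1.0 - np.mean((w_h - w_m) ** 2))

# Full blending: AC = 0, WC = 1.  Pure half/half routing: AC = 0.25, WC = 0.
```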

3. Mechanisms Underlying Human–ML Complementarity

Complementarity arises specifically when humans and ML agents differ along one or more taxonomy attributes. The key mechanisms identified include:

  • Input Asymmetry ($\mathcal{X}_h \neq \mathcal{X}_m$): Each agent may be strong on distinct features, favoring local averaging (high $WC$) when features are complementary, or pure deferral ($AC$) when a clear strength dominates per instance.
  • Objective Misalignment ($f_h \neq f_m$): Aggregation should route occasions to the agent best aligned with the target evaluation function $f$, especially when each agent's training involved a different cost function or label proxy.
  • Model Class Discrepancy ($\Pi_h \neq \Pi_m$): Human mental models may generalize out-of-distribution where ML models excel at exploiting high-volume statistical structure; aggregation can blend their generalization domains.
  • Inconsistency/Noise: Human decisions are susceptible to intra-individual variability (“noise”), whereas ML agents tend toward deterministic output. The aggregation weights can be adapted dynamically to prioritize ML when human inconsistency is detected.
  • Time Constraints: Trivial or routine decisions can be assigned to ML for efficiency, whereas exceptions or high-uncertainty cases are deferred to human agents.

Each mechanism can be explicitly mapped to the manner in which aggregation weights $(w_h^i)$ are learned and deployed over instances, with performance gains explained by the specific diversity of strengths engaged.
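As a minimal synthetic illustration of the input-asymmetry mechanism (not the paper's experiment), suppose the target depends on two features and each agent observes only one of them. Full within-instance blending then recovers signal that neither standalone agent can see:

```python
import numpy as np

# Hypothetical setup: y depends on two features; each agent sees only one,
# modeling input asymmetry (X_h != X_m).
rng = np.random.default_rng(0)
n = 1000
x_h = rng.normal(size=n)          # feature visible only to the human
x_m = rng.normal(size=n)          # feature visible only to the ML model
y = x_h + x_m

pi_h, pi_m = x_h, x_m             # each agent predicts from its own view
blend = 0.5 * pi_h + 0.5 * pi_m   # full within-instance blending (w_h = w_m = 0.5)

def mse(pred):
    return float(np.mean((pred - y) ** 2))

# Each standalone agent's error is the variance of the unseen feature,
# while blending roughly halves that error, yielding complementarity.
assert mse(blend) < min(mse(pi_h), mse(pi_m))
```

Because the agents' strengths are feature-wise complementary on every instance, the optimal weights here are interior (high $WC$) rather than routing weights (high $AC$).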

4. Empirical and Theoretical Conditions for Gaining Complementarity

Synthetic simulations in the referenced work and general theoretical analysis confirm that the prerequisite for sustainable complementarity is persistent, exploitable difference along taxonomy axes. Hybrid systems whose agents mirror one another, being identical in policy class, training data, and objectives, can at best match, but not exceed, standalone performance.

Critical conditions include:

  • Sufficient input diversity (distinct information or lived experience).
  • Persistent optimization and model class differences (causal vs. statistical, heuristic vs. global optimality).
  • Non-negligible discrepancy in action space, explanation, and uncertainty signaling.
  • Task/objective misalignment between agents.

These conditions enable aggregation policies to leverage both deferral and blending, aligning the chosen agent or combination to the local structure and risk of each decision.

5. Implications for Hybrid System Design and Future Research

The taxonomy provides a principled language and analytic foundation to reason about the sources of human–ML complementarity, supporting both empirical inquiry and system design. Its explicit mapping between attribute differences and aggregation mechanisms allows designers to predict—not merely hope for—complementarity by probing which differences are likely to be relevant for the target domain.

A plausible implication is that attempts to maximize complementarity should focus on cultivating and preserving such differences rather than minimizing human–ML discrepancies. Redundant overlap in agent knowledge or identical optimization strategies can reduce hybrid gains; by contrast, heterogeneity along the taxonomy’s axes offers the raw material for outperforming stand-alone systems.

6. Synthesis: Role of the Human-Attribute Taxonomy in Complementarity

In summary, the taxonomy of human versus ML attributes provides a rigorous scaffold for analyzing, engineering, and optimizing hybrid decision-making systems. Each attribute—whether it pertains to objectives, information, model class, action space, or output modalities—serves as both a diagnostic lens and a design lever. The convex-combination aggregation framework mathematically formalizes how these diverse strengths can be strategically combined to yield true complementarity, as determined by strict improvement over either agent (Eq. 3). Together, these scientific tools advance the understanding and deployment of hybrid human–ML systems across consequential domains (Rastogi et al., 2022).
