Human–AI Trust Attitude Scale (HATAS)
- The Human–AI Trust Attitude Scale (HATAS) is a psychometric instrument designed to quantify multidimensional trust attitudes toward AI systems.
- The Likert-based variant operationalizes eight trust factors with robust reliability (α = 0.95) and confirmed structural validity via CFA and related tests.
- The semantic differential variant distinguishes cognitive and affective trust, providing fine-grained insights with near-perfect internal consistency.
The Human–AI Trust Attitude Scale (HATAS) refers to a family of rigorously validated psychometric instruments for quantifying human trust attitudes toward AI systems. Two independently developed instruments bear the HATAS designation: a Likert-type multidimensional scale (Larasati, 24 Oct 2025) and a semantic differential scale for cognitive and affective trust (Shang et al., 2024). Both have been developed and validated to operationalize trust constructs in empirical research on human–AI interaction.
1. Theoretical Foundations and Conceptual Structure
Trust in human–AI interaction research is conceptualized as an attitude reflecting “willingness to be vulnerable” to AI systems, explicitly distinct from observed behavioral manifestations such as reliance or compliance (Larasati, 24 Oct 2025). This attitude is understood as a latent, multidimensional construct encompassing both cognitive and affective dimensions.
The Likert-based HATAS (Larasati, 24 Oct 2025) builds upon established trust models in organizational behavior and human-computer interaction (e.g., Mayer et al., 1995; Lee & See, 2004) and decomposes trust attitude into multiple factors: perceived technical competence, reliability, understandability, helpfulness, faith, personal attachment, user autonomy, and institutional credibility.
In contrast, the semantic-differential HATAS (Shang et al., 2024) is grounded in the interpersonal trust literature (e.g., McAllister, 1995) and models trust as a bivariate structure:
- Cognitive trust (reasoned judgement of an AI agent’s ability, reliability, predictability, integrity, and transparency)
- Affective trust (emotional response, including benevolence, empathy, and likability)
These conceptualizations are empirically distinguished as correlated but separable latent factors.
2. Scale Development and Psychometric Validation
2.1 Likert-based HATAS (Larasati, 24 Oct 2025)
Scale development proceeded through six stages:
- Item Generation: Deductive literature review identified six core trust factors. An inductive vignette and focus group study in a fictional AI-based breast cancer detection scenario elicited two additional factors (user autonomy, institutional credibility), resulting in eight dimensions.
- Expert Validation: Seven experts rated 40 items; items with Content Validity Index (CVI) < 0.70 or κ < 0.40 were removed. The two items per factor with highest CVI/κ scores were retained, yielding a 16-item, eight-factor scale.
- Cognitive Interviews: Think-aloud protocols (n = 9) refined item wording.
- Survey Administration: Main survey (n = 300, Amazon Mechanical Turk (MTurk) Masters workers) administered after brief AI-in-healthcare demonstrations. Responses used 7-point Likert scaling.
- Dimensionality Testing: Kaiser–Meyer–Olkin (KMO = 0.946), Bartlett’s test (p < .001), and confirmatory factor analysis (CFA) supported the eight-factor structure (CFI = 0.987; TLI = 0.979; RMSEA = 0.046; SRMR = 0.023). Item-factor loadings ranged from 0.57 to 0.84.
- Reliability and Validity: Cronbach’s α per subscale: 0.72–0.91 (overall α = 0.95). McDonald’s ω values consistent. Test–retest ICC = 0.74 (n = 304). Convergent validity supported (CR > 0.70, AVE > 0.50). Fornell–Larcker and HTMT ratios < 0.90 confirmed discriminant validity. Concurrent and predictive validity analyses were also performed.
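The internal-consistency statistics reported above can be reproduced from item-level data. The following is a minimal sketch of Cronbach's alpha for a respondents × items score matrix; the sample data are illustrative, not drawn from the HATAS validation study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 7-point Likert responses: 4 respondents x 3 items
scores = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [3, 2, 3],
    [7, 6, 7],
], dtype=float)
print(round(cronbach_alpha(scores), 3))
```

Note that alpha approaches 1.0 as items become perfectly correlated, which is why the near-perfect subscale alphas reported for both HATAS variants indicate highly homogeneous item sets.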
2.2 Semantic Differential HATAS (Shang et al., 2024)
Development followed:
- Item Pool Generation: Literature review yielded 56 candidate bipolar adjective pairs.
- Factor Analysis: Exploratory factor analysis (EFA, n = 151, KMO = 0.98) and parallel analysis reduced the pool to 27 items: cognitive (18 items), affective (9 items). Oblique rotation produced correlated latent factors (r = 0.78).
- Confirmatory Factor Analysis: Diagonally weighted least squares estimation in a follow-up study (n = 44): χ²(494) = 250.94, p = 1.000, CFI = 1.000, TLI = 1.003, RMSEA = 0.000, SRMR = 0.038.
- Reliability: Cronbach’s α: .98 and .96 for cognitive and affective subscales, respectively.
- Validity: Convergent (cognitive trust: r = .881; affective: r = .253 with general trust), discriminant (no overlap with moral trust construct), and criterion validity established.
3. Scale Structure and Scoring Protocols
3.1 Likert-based HATAS Structure
- Eight Dimensions: Technical competence, reliability, understandability, helpfulness, faith, personal attachment, user autonomy, institutional credibility.
- Items: Each factor measured by two statements (randomized order) on a 7-point Likert scale (1 = Strongly disagree, 7 = Strongly agree).
- Scoring: Subscale score = mean (or sum) of the two items in each dimension. Total score = mean (or sum) of all 16 items; higher values reflect greater trust attitude.
- Example Items: “The AI system uses appropriate methods to get results based on the information I input.” (Technical Competence)
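The scoring protocol above can be sketched programmatically. The positional item-to-dimension mapping below is hypothetical (the published instrument randomizes item order and supplies its own item key); only the mean-scoring logic reflects the protocol:

```python
from statistics import mean

# Hypothetical positional mapping of the 16 items to the eight dimensions;
# the actual assignment comes from the published instrument's item key.
DIMENSIONS = {
    "technical_competence": (0, 1),
    "reliability": (2, 3),
    "understandability": (4, 5),
    "helpfulness": (6, 7),
    "faith": (8, 9),
    "personal_attachment": (10, 11),
    "user_autonomy": (12, 13),
    "institutional_credibility": (14, 15),
}

def score_hatas(responses: list[int]) -> dict[str, float]:
    """Mean subscale and total scores for 16 seven-point Likert responses."""
    if len(responses) != 16 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("expected 16 responses on a 1-7 scale")
    scores = {dim: mean(responses[i] for i in idx)
              for dim, idx in DIMENSIONS.items()}
    scores["total"] = mean(responses)  # higher values = greater trust attitude
    return scores
```

For example, a respondent answering 7 on both technical-competence items and 1 on both user-autonomy items would receive subscale scores of 7.0 and 1.0 respectively, with the total score averaging across all 16 items.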
3.2 Semantic Differential HATAS Structure
- Factors: Cognitive trust (e.g., unreliable–reliable, incomprehensible–understandable), affective trust (e.g., apathetic–empathetic, rude–cordial).
- Response Format: 5-point scale with anchors coded from –2 (negative) to +2 (positive).
- Scoring: Compute mean (or sum) for cognitive (18 items), affective (9 items), and overall (27 items). Means > 0 indicate positive trust; < 0 denote distrust.
- Interpretation Cutoffs: Mild trust (0.5 to 1.0), strong trust (> 1.0), mild distrust (−0.5 to 0.0), strong distrust (< −1.0).
- Format Flexibility: 7-point scaling variant supported.
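A minimal scoring sketch following the protocol above (the function name is illustrative, not from the published instrument); the sign convention follows the −2 to +2 anchor coding, with means above zero indicating positive trust:

```python
from statistics import mean

def score_sd_hatas(cognitive: list[int], affective: list[int]) -> dict[str, float]:
    """Mean cognitive (18 items), affective (9 items), and overall (27 items)
    scores from semantic-differential responses coded -2..+2."""
    if len(cognitive) != 18 or len(affective) != 9:
        raise ValueError("expected 18 cognitive and 9 affective responses")
    if not all(-2 <= v <= 2 for v in cognitive + affective):
        raise ValueError("responses must be coded on the -2..+2 anchors")
    return {
        "cognitive": mean(cognitive),
        "affective": mean(affective),
        # Overall mean > 0 indicates positive trust; < 0 denotes distrust.
        "overall": mean(cognitive + affective),
    }
```

Because the subscales have unequal item counts (18 vs. 9), the overall mean is weighted toward the cognitive items; researchers comparing subscales directly should report the two means separately.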
Table: Comparison of HATAS Variants
| HATAS Variant | Dimensions / Structure | Item Format |
|---|---|---|
| Likert-based (Larasati, 24 Oct 2025) | 8 (2 items each) | 16 statements, 7-point Likert |
| Semantic Differential (Shang et al., 2024) | 2 (Cognitive: 18, Affective: 9) | 27 bipolar pairs, 5-point SD |
4. Administration Procedures
Administration of both scales includes:
- Randomized item/pair order to control for response bias.
- Brief context-setting vignette or demonstration of the target AI system recommended.
- Estimated completion time: 5–7 minutes (Likert scale); similar for semantic differential.
Sample size recommendations to ensure validity: n ≈ 300 for CFA with the Likert-based scale; n ≥ 150 for EFA and n ≥ 200 for CFA with the semantic differential scale.
5. Domain Applicability and Adaptation
The Likert-based HATAS was developed and initially validated in the context of AI-based medical decision support systems (notably cancer and health-risk prediction). Demonstrations included videos of FDA-approved or CE-marked applications. Adaptation to other domains entails:
- Reframing medical-specific terminology for the target context.
- Cognitive interviews and pilot testing with intended user populations.
- Reassessment of psychometric indices (content validity, CFA, reliability, discriminant/convergent validity) and criterion linkage with domain outcomes such as acceptance and reliance.
The semantic-differential HATAS is designed for general applicability across diverse AI agent contexts, including conversational and assistance scenarios. Scenario-based vignettes or exposure to real-world AI systems may be embedded in administration. The scale is suitable for cross-system or cross-condition comparative research, and researchers may benchmark scores against norms from their own samples.
6. Empirical and Interpretive Guidance
- Likert-based HATAS: Each dimension demonstrates high reliability (α = 0.72–0.91; overall α = 0.95), with evidence of discriminant validity among trust-related constructs. Factor intercorrelations highlight potential domain-specific patterns, warranting further theoretical analysis of highly overlapping constructs (e.g., reliability and helpfulness).
- Semantic-differential HATAS: Factor loadings consistently exceed 0.55, with low cross-loadings, and both subscales demonstrate near-perfect internal consistency (α = .98/.96). Cognitive trust exhibits a stronger convergent association with general trust, while affective trust reliably contributes to holistic trust attitudes in scenarios highlighting agent warmth or empathetic behavior. Differences of 0.5 on a subscale are considered substantively meaningful.
- Both scales are psychometrically validated using state-of-the-art procedures (CFA, composite reliability, HTMT, Fornell–Larcker, etc.), supporting rigorous longitudinal or comparative studies.
7. Contextual Significance and Related Measures
The emergence of HATAS addresses a critical gap in the empirical study of human trust toward AI by providing standardized, validated metrics that disambiguate cognitive, affective, and institutional facets of trust. This enables systematic investigation of trust’s determinants, its modulation by system attributes (e.g., accuracy, transparency), and its effects on user acceptance, reliance, and safety-critical deployment decisions.
The semantic-differential HATAS complements prior work on uni-dimensional and cognitive-only trust scales by capturing the dual-route structure of affective and cognitive bases of trust, a distinction of increasing relevance as AI systems adopt more anthropomorphic and socially interactive roles (Shang et al., 2024).
The multidimensional Likert-based HATAS situates trust in the ecology of human–AI medical decision-making, supporting both basic research and translational deployment assessments. Both instruments set methodological standards for future trust measurement and enable meta-analytic aggregation across studies spanning application domains (Larasati, 24 Oct 2025).
Key References:
- "Human and AI Trust: Trust Attitude Measurement Instrument" (Larasati, 24 Oct 2025)
- "Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust" (Shang et al., 2024)