
Technology Acceptance Model (TAM)

Updated 3 March 2026
  • TAM is a theoretical framework that defines user acceptance through constructs like perceived usefulness and ease-of-use, widely applied in digital health, IoT, education, and more.
  • It operationalizes acceptance via standardized survey instruments and structural equation modeling, with empirical validation using metrics such as t-tests and p-values.
  • Extensions of TAM incorporate additional factors like trust, social influence, and perceived risk, providing actionable insights for practical and research applications.

The Technology Acceptance Model (TAM) is a foundational theoretical framework for explaining and predicting user acceptance of information and communication technologies. Initially formulated by Davis in 1985, TAM has been the basis for extensive empirical and conceptual research across domains—including digital health, IoT, education, AI, and organizational innovation. The model defines user acceptance as a function of critical beliefs about technology and structures these relationships in rigorous measurement and structural models.

1. Core Constructs and Theoretical Structure

The original form of TAM specifies four principal constructs:

  • Perceived Usefulness (PU): The degree to which a user believes that using a specific system will enhance job or task performance. In the indoor air quality (IAQ) context, PU is adapted to "the degree to which a person believes that using the system will enhance his or her IAQ awareness/management."
  • Perceived Ease of Use (PEU or PEOU): The degree to which a user believes that system use will be free of effort.
  • Attitude Toward Use (AT): The user's overall affective response to the use of the system.
  • Behavioral Intention to Use (BI): The user's expressed likelihood of using the technology in the future.

The canonical relationships, commonly specified in structural equation models (SEM), are:

\begin{aligned}
PU &= \gamma_1 \cdot PEOU + \varepsilon_1 \\
AT &= \alpha_1 \cdot PU + \alpha_2 \cdot PEOU + \varepsilon_2 \\
BI &= \beta_1 \cdot AT + \varepsilon_3
\end{aligned}

In many TAM applications, direct paths from PU and PEOU to BI are modeled, notably as:

BI = \beta_2 \cdot PU + \beta_3 \cdot PEOU + \varepsilon_5

where behavioral intention may serve as a proxy for actual usage, and the model may be extended by domain-specific exogenous variables.
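The path structure above can be sketched numerically. The coefficient values below (γ₁ = 0.5, α₁ = 0.4, α₂ = 0.3, β₁ = 0.6) are illustrative placeholders, not estimates from any study, and error terms are set to zero for a deterministic example:

```python
# Numeric sketch of the canonical TAM paths with hypothetical
# path coefficients; error terms omitted for clarity.
def tam_predict(peou, g1=0.5, a1=0.4, a2=0.3, b1=0.6):
    pu = g1 * peou              # PU = gamma_1 * PEOU
    at = a1 * pu + a2 * peou    # AT = alpha_1 * PU + alpha_2 * PEOU
    bi = b1 * at                # BI = beta_1 * AT
    return pu, at, bi

# A one-unit PEOU score propagates through the full chain:
pu, at, bi = tam_predict(peou=1.0)
```

In an actual SEM application these coefficients would be estimated jointly from survey data rather than fixed in advance.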

2. Measurement and Operationalization

Empirical studies operationalize TAM through standardized survey instruments, with each construct measured by multiple Likert-scale items. For instance, in the IoT-driven IAQ platform study (Kureshi et al., 2024), the following items were used:

  • PU: "How useful do you find the indoor air quality data and information provided on our platform?" and related items on IAQ suggestions and data accuracy.
  • PEU: "How easy was it to navigate the indoor air quality platform?" including assessments of platform navigation and activity tracking.
  • AT: "How positively or negatively do you feel about using our indoor air quality platform regularly in the future?"
  • BI: "How likely are you to use our indoor air quality platform again tomorrow to improve indoor air based on the information available?"

Standard practice employs 5- or 7-point agreement scales, though reporting varies. Measurement validation should involve confirmatory factor analysis (CFA), internal consistency (e.g., Cronbach’s alpha), composite reliability, and average variance extracted (AVE), all of which are recommended but were not implemented in (Kureshi et al., 2024).
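As one example of the recommended reliability checks, Cronbach's alpha can be computed directly from item-level Likert responses. The data here is hypothetical and chosen only to make the calculation transparent:

```python
import statistics

def cronbach_alpha(items):
    # items: list of columns, one per survey item; each column holds
    # the Likert responses of all participants to that item.
    k = len(items)
    item_var_sum = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant sum score
    return (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))

# Two perfectly correlated (identical) items yield alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4, 5],
                        [1, 2, 3, 4, 5]])
```

In practice alpha is reported per construct (e.g., over all PU items), with values around 0.7 or above commonly treated as acceptable internal consistency.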

3. Empirical Validation and Dynamics

In application to the IAQ platform, the study utilized a repeated-measures design to assess TAM constructs over a period of platform exposure. Statistically significant improvements were observed for all TAM dimensions via paired t-tests:

Construct   t-statistic (Week 2 vs Week 3)   p-value
PU          –4.90                            0.00037
PEU         –2.83                            0.0152
AT          –6.97                            0.0000149
BI          –3.24                            0.00708

These results, while based on a small sample and lacking psychometric validation, suggest that repeated use of the platform increased perceived usefulness, perceived ease of use, attitude, and intention to use.
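The paired t-statistics in the table follow the standard formula t = mean(d) / (sd(d) / √n) over per-participant difference scores d. A minimal sketch on hypothetical scores (the study's raw data are not reproduced here):

```python
import math
import statistics

def paired_t(before, after):
    # Paired t-test: t = mean(d) / (sd(d) / sqrt(n)) on the
    # per-participant difference scores d = before - after.
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    t = statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))
    return t, n - 1  # t-statistic and degrees of freedom

# Hypothetical per-participant scores; scores that rise over time
# yield a negative t under this sign convention, consistent with
# the negative t-statistics reported in the table.
t, df = paired_t(before=[3, 3, 4], after=[4, 5, 5])
```

The p-value would then be obtained from the t-distribution with n − 1 degrees of freedom.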

4. Extensions, Contextualization, and Integration

TAM is frequently extended to integrate contextual, psychological, or holistic behavior-change constructs. In (Kureshi et al., 2024), the design of interventions was informed by the COM-B model (Capability, Opportunity, Motivation → Behavior), though COM-B was not itself a TAM construct. The platform incorporated digital health interventions, IAQ sensing, and citizen science, but the TAM model remained unmodified in measurement.

Key extensions recommended by experts in technology acceptance include the addition of constructs such as trust, perceived risk, social influence, and domain-specific behavioral drivers. These can be operationalized as exogenous variables directly affecting PU, PEU, or BI. For robust model assessment, contemporary best practices advocate simultaneous reporting of SEM fit statistics (e.g., CFI, TLI, RMSEA) alongside integration of quantitative and qualitative evidence.
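Of the fit statistics mentioned, RMSEA has a simple closed-form point estimate from the model chi-square, its degrees of freedom, and the sample size. A sketch with hypothetical inputs:

```python
import math

def rmsea(chi2, df, n):
    # RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    # Values below ~0.06 are conventionally read as good fit.
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical inputs: chi-square of 50 on 20 df with n = 151 participants.
fit = rmsea(chi2=50.0, df=20, n=151)
```

CFI and TLI are computed analogously but additionally require the chi-square of a baseline (independence) model, so they are usually taken from SEM software output.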

5. Theoretical and Applied Implications

The sustained relevance of TAM, as demonstrated in (Kureshi et al., 2024), rests on its predictive validity and adaptability across technological and intervention contexts:

  • Theoretical: Even short-term digital interventions can produce statistically significant shifts in PU, PEU, AT, and BI. The use of longitudinal or repeated-measures designs reveals crucial temporal dynamics in acceptance that static cross-sectional studies may overlook. Embedding behavioral science frameworks (e.g., COM-B) into design leverages TAM’s strengths but also highlights its limitations if not formally integrated.
  • Practice: Rapid improvements in acceptance dimensions can be achieved via transparent data visualization, actionable guidance, and daily activity logging. Platforms can capitalize on social diffusion (information sharing among users' networks) and behavioral change (e.g., ventilatory habits) as secondary benefits, though these are not formal TAM variables.

Key recommendations for future research in TAM include:

  • Employing comprehensive SEM approaches, including reporting of reliability, validity, path coefficients, explained variance (R²), and model-fit indices.
  • Explicitly defining measurement scales and survey item wordings.
  • Expanding model scope to include additional predictors of acceptance salient in IoT and health contexts.
  • Integrating qualitative insights with quantitative modeling to capture the full complexity of user acceptance in real-world deployments.

Construct   Item description
PU          Usefulness of data, helpfulness of suggestions, helpfulness of activity tracking, perceived data accuracy
PEU         Ease of navigation, ease of activity tracking, reported difficulties
AT          Positive/negative feelings about future use
BI          Likelihood of use in the near future to improve IAQ

Transparent measurement, coupled with dynamic, intervention-responsive platform features, supports effective TAM-based adoption of digital health tools. By integrating rigorous measurement of core TAM constructs with behavioral and context-specific factors, research can move toward both greater explanatory power and practical impact in technology uptake (Kureshi et al., 2024).
