- The paper demonstrates that trust indirectly influences AI acceptance by shaping perceived usefulness and fostering positive user attitudes.
- The paper identifies two key trust dimensions, highlighting that functionality trust has a stronger impact on usage intention than human-like trust.
- The paper validates the Technology Acceptance Model for AI technologies, underscoring the need for user-friendly, reliable, and transparent designs.
Trust in AI and Its Role in the Acceptance of AI Technologies
The paper by Choung, David, and Ross investigates the critical role of trust in the acceptance and use of AI technologies. As AI systems permeate various sectors, a nuanced understanding of trust dynamics becomes crucial. The paper uses the Technology Acceptance Model (TAM) as its theoretical framework to explore how trust influences the acceptance of AI-driven technologies.
The research is conducted through two studies, both aiming to validate the significance of trust and its dimensions within the TAM framework. Study 1 examines college students' interactions with AI voice assistants using survey data, confirming that trust significantly affects the intention to use, mediated by perceived usefulness (PU) and attitude. Study 2 extends these findings to a representative U.S. sample and explores trust as a multidimensional construct, identifying two primary dimensions: human-like trust and functionality trust.
Key Findings
- Trust's Indirect Influence: Across both studies, trust affects the intention to use AI technologies indirectly, primarily through PU and attitude: trust enhances perceptions of usefulness and fosters positive attitudes, which in turn drive acceptance. This robust indirect pathway underscores trust's pivotal role in technology acceptance.
- Two Dimensions of Trust: The exploratory factor analysis in Study 2 separates trust into two dimensions: human-like trust and functionality trust. Functionality trust, encompassing competence and reliability, has the stronger impact on usage intentions across the AI technologies studied. Human-like trust, related to perceived benevolence and integrity, adds insight into the emotional and social aspects of AI interaction.
- Application of TAM to AI Technologies: The TAM framework, originally developed for conventional information technologies, proves effective in AI contexts as well. Perceived ease of use (PEU) and PU remain significant predictors of technology acceptance, with PEU consistently showing a strong total effect on usage intention.
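The indirect-effect logic behind the first finding can be sketched numerically. The snippet below is a hypothetical illustration on simulated data, not the paper's dataset or analysis: it assumes trust raises perceived usefulness (PU), which in turn raises usage intention, and estimates the mediated effect as the product of the two regression paths (a classic product-of-coefficients mediation check).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated (hypothetical) data: trust -> PU (path a),
# PU -> intention (path b), plus a small direct trust effect (path c').
trust = rng.normal(size=n)
pu = 0.6 * trust + rng.normal(scale=0.5, size=n)
intention = 0.7 * pu + 0.1 * trust + rng.normal(scale=0.5, size=n)

def ols(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(pu, trust)[0]                    # trust -> PU
b, c_prime = ols(intention, pu, trust)   # PU -> intention, direct trust effect
indirect = a * b                         # mediated (indirect) effect of trust

print(f"indirect effect a*b = {indirect:.2f}, direct effect c' = {c_prime:.2f}")
```

On data generated this way, the indirect effect dominates the direct one, mirroring the paper's finding that trust operates mainly through PU and attitude rather than directly.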
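The two-dimensional trust structure from Study 2 can also be illustrated. The sketch below again uses simulated, hypothetical survey items (three loading on a "functionality trust" factor, three on a "human-like trust" factor) and recovers the grouping with a principal-components decomposition of the item correlation matrix, a simplified stand-in for the exploratory factor analysis the paper actually reports.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical item responses: two uncorrelated latent trust factors.
func = rng.normal(size=n)    # latent functionality trust
human = rng.normal(size=n)   # latent human-like trust
noise = lambda: rng.normal(scale=0.5, size=n)
items = np.column_stack([
    0.85 * func + noise(), 0.85 * func + noise(), 0.85 * func + noise(),
    0.55 * human + noise(), 0.55 * human + noise(), 0.55 * human + noise(),
])

# Principal components of the item correlation matrix (eigh returns
# eigenvalues in ascending order, so we reverse to get the top two).
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

# Each item should load mainly on one of the two components.
dominant = np.argmax(np.abs(loadings), axis=1)
print(dominant)
```

The first three items share one dominant component and the last three share the other, the same kind of clean two-factor separation the paper interprets as functionality trust versus human-like trust.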
Implications and Future Directions
The paper contributes significantly to the theoretical extension of the TAM by integrating trust as a multidimensional construct, demonstrating its centrality in AI technology acceptance. From a practical perspective, the findings emphasize the value of designing AI systems that are user-friendly, reliable, and transparent to encourage trust and subsequent adoption. The paper also underscores the value of incorporating anthropomorphic elements where appropriate to engender emotional trust, especially in consumer-facing applications.
Future research could extend the multidimensional trust model to high-stakes AI applications, such as autonomous vehicles and healthcare solutions, to explore whether these dimensions hold or if additional factors emerge. Moreover, examining trust dynamics across different stages of technology deployment and in varied sociocultural contexts could provide further granular insights.
In summary, this research elucidates the nuanced role of trust in AI technology acceptance, advocating for continued exploration of trust-oriented design and management strategies in AI system development. By identifying crucial trust factors and validating their influence within the TAM framework, this paper lays the groundwork for advancing both theoretical understanding and practical approaches to fostering trust in AI.