The impact of labeling automotive AI as "trustworthy" or "reliable" on user evaluation and technology acceptance (2408.10905v1)
Abstract: This study explores whether labeling AI as "trustworthy" or "reliable" influences user perceptions and acceptance of automotive AI technologies. Using a one-way between-subjects design, the research involved 478 online participants who were presented with guidelines for either trustworthy or reliable AI. Participants then evaluated three vignette scenarios and completed a modified version of the Technology Acceptance Model, which included variables such as perceived ease of use, human-like trust, and overall attitude. Although labeling AI as "trustworthy" did not significantly influence judgments on specific scenarios, it increased perceived ease of use and human-like trust, particularly benevolence. This suggests a positive impact on usability and an anthropomorphic effect on user perceptions. The study provides insights into how specific labels can influence attitudes toward AI technology.