User Acceptance and Concerns for LLM-Powered Conversational Agents in XR

Determine how users’ technology acceptance and concerns are shaped when large language model (LLM)-powered conversational agents are embedded into extended reality (XR) systems and devices, including mixed reality (MR) and virtual reality (VR) head-mounted displays, in order to understand the adoption dynamics and apprehensions specific to this integration.

Background

The paper reviews extensive prior work on user attitudes toward head-mounted displays and conversational agents separately, noting that XR devices collect rich sensor data and that LLM-powered agents enable naturalistic conversations that may prompt sensitive disclosures. This convergence raises distinct privacy, security, social, and trust considerations.

Despite these developments, the authors highlight a gap: it has not been established how users’ acceptance and concerns are specifically shaped when generative-AI/LLM conversational agents are embedded within XR settings and devices. The study addresses this gap with baseline evidence from a large-scale crowdsourcing survey, systematically characterizing these perceptions in the XR-LLM context.

References

However, despite extensive research on HMDs for MR, AR, and VR, it remains an open question how users' technology acceptance and concerns are shaped, especially when novel conversational AI agents, facilitated by generative AI and LLMs, are embedded into XR settings and devices.

Exploring User Acceptance and Concerns toward LLM-powered Conversational Agents in Immersive Extended Reality (2512.15343 - Bozkir et al., 17 Dec 2025), Section 2.2 (Users' Perspectives on Conversational Agents and XR)