
Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content (2309.02524v1)

Published 5 Sep 2023 in cs.HC and cs.AI

Abstract: This paper examines how individuals perceive the credibility of content originating from human authors versus content generated by LLMs, like the GPT LLM family that powers ChatGPT, in different user interface versions. Surprisingly, our results demonstrate that regardless of the user interface presentation, participants tend to attribute similar levels of credibility. While participants also do not report any different perceptions of competence and trustworthiness between human and AI-generated content, they rate AI-generated content as being clearer and more engaging. The findings from this study serve as a call for a more discerning approach to evaluating information sources, encouraging users to exercise caution and critical thinking when engaging with content generated by AI systems.

Perceived Credibility of Human and AI-Generated Content: An Analysis of User Trust in ChatGPT

The paper offers a structured analysis of the comparative perceived credibility of content produced by human authors versus AI-generated output, with specific emphasis on LLMs such as those underlying ChatGPT. This research is timely given the steady rise of AI applications for information generation and dissemination, and it highlights significant considerations for both user awareness and the inherent biases of these systems.

Study Methodology and Setup

The authors recruited 606 participants and presented them with texts in three distinct user interface (UI) versions: a ChatGPT UI, a raw-text UI, and a Wikipedia UI. The content was either written by humans or generated by AI (specifically, by ChatGPT). Participants rated the credibility of the content they encountered on key dimensions such as competence, trustworthiness, clarity, and engagement.

Key Observations

UI Conditions and Credibility Perception: One of the central findings is that variations in the user interface had no significant impact on the perceived credibility of the content. Participants attributed similar levels of competence and trustworthiness to the content regardless of the UI presentation.

Comparison of Content Origin: Notably, while content origin did not substantially shift perceptions of competence and trustworthiness, AI-generated content was consistently perceived as clearer and more engaging than its human-written counterpart. This difference gives AI-generated content an advantage in engaging users, but it also poses risks given the known tendency of AI outputs to contain inaccuracies.

Implications of Findings

The findings of this paper call for a cautious approach to interpreting AI-generated content. Although the perceived polish (clarity and engagement) of AI-generated texts may captivate users, the perception that AI and human content are equally competent and reliable raises concerns. Such perceptions overlook the fallibility and potential hallucinations of AI systems, which stem from their reliance on extensive, but not always reliable, training data.

In practical terms, this paper underscores the need for rigorous discernment and critical evaluation by consumers of AI-generated content. The finding that AI-generated content is perceived as more engaging entails additional responsibility for developers to mitigate misinformation risks. Furthermore, as these technologies become more ubiquitous, there is a pressing need for educational strategies that improve public understanding of AI's inherent limitations, promoting informed consumption and safeguarding against misinformation.

Future Prospects and Research Directions

The continuous evolution of LLMs demands ongoing scrutiny of their broader societal impacts and of user perception, particularly the long-term trajectory of AI credibility, a topic warranting longitudinal investigation. Exploring more diverse content types and broader participant demographics could offer more comprehensive insights into these dynamics.

Ultimately, this paper makes an essential contribution to the foundational understanding needed to navigate the rapidly evolving landscape of AI content generation. It underscores the need for mindful interaction between humans and machines, fostering an environment in which AI serves as an augmentative force rather than a source of ambiguity and misinformation.

Authors (4)
  1. Martin Huschens (1 paper)
  2. Martin Briesch (10 papers)
  3. Dominik Sobania (15 papers)
  4. Franz Rothlauf (17 papers)
Citations (6)