Co-Writing with Opinionated Language Models Affects Users' Views (2302.00560v1)

Published 1 Feb 2023 in cs.HC, cs.AI, and cs.CL

Abstract: If LLMs like GPT-3 preferably produce a particular point of view, they may influence people's opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write - and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated LLM affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.

Citations (164)

Summary

  • The paper empirically demonstrates that co-writing with a large language model opinionated toward a specific viewpoint can significantly shift users' expressed opinions in their writing and influence their subsequent personal attitudes.
  • The study introduces "latent persuasion" to describe how AI language technologies subtly influence user opinions, suggesting this occurs through potential informational or normative influence during interaction.
  • Findings highlight ethical concerns regarding AI's persuasive power, emphasizing the need for developers and policymakers to address manipulation risks and advance methods for auditing and aligning model content ethically.

Co-Writing with Opinionated LLMs Affects Users' Views

The paper "Co-Writing with Opinionated LLMs Affects Users' Views" conducts an empirical investigation into the influence of LLMs on individuals’ opinions, specifically when integrated into a co-writing tool. This paper involved 1,506 participants who were assigned the task of composing a discussion post about the societal impact of social media, using either an unassisted method or with assistance from GPT-3 configured to favor particular opinions. Subsequent to completing the writing task, participants were surveyed on their personal attitude toward social media.

Key Findings

The paper presents strong evidence that interacting with an opinionated LLM measurably shifts the opinions expressed in users' written content. Participants exposed to a model generating arguments that social media benefits society were significantly more likely to incorporate favorable sentiments in their posts than those in the control group. Conversely, participants who interacted with a model designed to highlight the negative aspects of social media showed a corresponding negative shift in their writing.

Furthermore, the experiment demonstrated that these interactions influenced participants' subsequent attitudes, as revealed in the post-task surveys. Participants who had written with the model arguing in favor of social media expressed more positive views about it afterwards, while those exposed to the pessimistic model exhibited an increase in negative sentiment.

Implications and Theoretical Considerations

The findings underscore the critical need to scrutinize the latent persuasive power inherent in AI language technologies. In this context, "latent persuasion" is introduced to describe the subtle influence LLMs can exert on users’ opinions during everyday communication and decision-making processes. This persuasive impact is not merely a function of convenience but intertwines with the cognitive processes underlying opinion formation.

The paper considers several conceptual frameworks for the underlying influence mechanisms. Informational influence may arise when the model provides compelling arguments or novel information, thereby swaying the user's stance. Alternatively, normative influence could occur if users perceive the model's suggestions to carry authoritative validity or expertise. Both potential mechanisms highlight the unique interaction dynamics between users and AI and warrant further exploration.

This research also adds to existing discussions on the ethical considerations surrounding AI deployment. Because model biases are easy to manipulate, developers and policymakers must address the ethical and societal implications of such techniques.

Future Directions

Future research should examine the long-term effects of prolonged interaction with opinionated AI systems across diverse communication contexts and topics. Investigating how opinion influence varies with model configuration in real-world applications remains crucial to understanding the full spectrum of implications. Additionally, advancing methods for auditing model-generated content and aligning it with ethical standards is urgently needed to prevent misuse and promote fair AI-mediated communication.
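
The paper leaves such auditing procedures open. As a minimal sketch of one possible approach, the code below samples many suggestions for a neutral probe prompt and checks whether their stance distribution is skewed; the `generate` hook, the toy keyword-based stance scorer, and the skew threshold are all illustrative assumptions rather than an established method.

```python
# Minimal sketch of an opinion audit: sample many suggestions for a neutral
# prompt and check whether their stance distribution is skewed toward one side.
import random
from statistics import mean

def generate(prompt: str) -> str:
    """Stand-in for a call to the writing assistant under audit."""
    # A real audit would call the deployed model here.
    return random.choice([
        "Social media connects communities and amplifies good causes.",
        "Social media harms attention spans and spreads misinformation.",
    ])

def stance_score(text: str) -> int:
    """Toy stance scorer: +1 if the text reads pro-social-media, -1 if anti, 0 otherwise."""
    pro = ("connects", "amplifies", "good")
    con = ("harms", "misinformation", "bad")
    score = sum(w in text.lower() for w in pro) - sum(w in text.lower() for w in con)
    return (score > 0) - (score < 0)

def audit_opinion_skew(prompt: str, n_samples: int = 100, threshold: float = 0.2) -> bool:
    """Flag the assistant if the mean stance of its suggestions deviates from neutral."""
    scores = [stance_score(generate(prompt)) for _ in range(n_samples)]
    skew = mean(scores)
    print(f"mean stance over {n_samples} samples: {skew:+.2f}")
    return abs(skew) > threshold

if __name__ == "__main__":
    audit_opinion_skew("Is social media good for society? Suggest one sentence.")
```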

In conclusion, the interaction between humans and LLM co-writing tools raises significant questions for computational persuasion, opinion formation, and the broader field of human-computer interaction. Continuing this line of inquiry is essential to ensure the responsible and balanced integration of AI technologies across platforms and societal applications.
