
Simulating Opinion Dynamics with Networks of LLM-based Agents (2311.09618v4)

Published 16 Nov 2023 in physics.soc-ph and cs.CL

Abstract: Accurately simulating human opinion dynamics is crucial for understanding a variety of societal phenomena, including polarization and the spread of misinformation. However, the agent-based models (ABMs) commonly used for such simulations often over-simplify human behavior. We propose a new approach to simulating opinion dynamics based on populations of LLMs. Our findings reveal a strong inherent bias in LLM agents towards producing accurate information, leading simulated agents to consensus in line with scientific reality. This bias limits their utility for understanding resistance to consensus views on issues like climate change. After inducing confirmation bias through prompt engineering, however, we observed opinion fragmentation in line with existing agent-based modeling and opinion dynamics research. These insights highlight the promise and limitations of LLM agents in this domain and suggest a path forward: refining LLMs with real-world discourse to better simulate the evolution of human beliefs.

Simulating opinion dynamics using agent-based models (ABMs) offers insights into phenomena like polarization and misinformation spread. However, traditional ABMs often rely on simplified representations of human cognition and behavior. The paper "Simulating Opinion Dynamics with Networks of LLM-based Agents" (Chuang et al., 2023) explores leveraging LLMs as the cognitive engine for agents within these simulations, aiming for richer, more nuanced interactions based on natural language. This approach replaces hand-crafted rules for belief updates with the inferential and generative capabilities of LLMs operating on textual personas and interaction histories.

Framework for LLM-based Opinion Dynamics Simulation

The core methodology involves constructing a network of LLM agents and simulating their interactions over time to observe the evolution of their opinions.

Agent Initialization and Personas

A simulation starts with a network of $N$ agents, each backed by an LLM (e.g., gpt-3.5-turbo-16k). Each agent $a_i$ is initialized with a textual persona comprising:

  • Demographics: Name, political leaning, age, gender, ethnicity, education, occupation.
  • Initial Opinion: A statement reflecting their starting stance on the simulation topic (e.g., "Strongly positive opinion about climate change mitigation").

This persona text constitutes the agent's initial memory state, $m_i^{t=0}$. This approach allows for encoding complex initial conditions beyond simple numerical values.
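
A minimal sketch of persona initialization under these assumptions (the field names and the exact persona wording are illustrative, not the authors' prompt template):

from dataclasses import dataclass, field

@dataclass
class Agent:
    """An LLM-backed agent; the persona text doubles as the initial memory m_i^{t=0}."""
    name: str
    political_leaning: str
    age: int
    gender: str
    ethnicity: str
    education: str
    occupation: str
    initial_opinion: str            # e.g. "Strongly positive opinion about climate change mitigation"
    current_opinion: float = 0.0    # numeric opinion o_i^t, later filled in by the opinion classifier
    memory: str = field(init=False)

    def __post_init__(self):
        # Encode the demographic persona and initial stance as free text (m_i^{t=0}).
        self.memory = (
            f"You are {self.name}, a {self.age}-year-old {self.gender} of {self.ethnicity} background "
            f"with {self.education}, working as {self.occupation}. "
            f"Your political leaning is {self.political_leaning}. "
            f"Your current opinion: {self.initial_opinion}."
        )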

Interaction Protocol

The simulation proceeds over $T$ discrete time steps. In each step $t$:

  1. Agent Selection: A pair of agents ($a_i$, $a_j$) is randomly selected for interaction. This corresponds to a fully connected network topology with random dyadic encounters.
  2. Message Generation: The "speaker" agent $a_i$ generates a message $x_i^t$ (e.g., a tweet) based on its current memory $m_i^t$. This involves prompting the LLM with the agent's persona/memory and asking it to express its current view on the topic.
  3. Message Reception and Belief Update: The "listener" agent $a_j$ receives the message $x_i^t$. It is prompted with its own memory $m_j^t$ and the received message $x_i^t$, and asked to generate a verbal report $r_j^t$ detailing its reaction and updated belief state.
  4. Opinion Classification: The textual report $r_j^t$ is converted into a numerical opinion score $o_j^t$ using an external opinion classifier model, $f_{oc}$. The paper employed FLAN-T5-XXL fine-tuned for this task, mapping the verbal report to a discrete scale (e.g., -2 'strongly negative' to +2 'strongly positive'). This step is crucial for quantitative analysis of opinion shifts.
  5. Memory Update: Both agents update their memories ($m_i^{t+1}$, $m_j^{t+1}$) based on the interaction. The speaker updates its memory based on having generated $x_i^t$, and the listener updates based on receiving $x_i^t$ and generating $r_j^t$.

Memory Mechanisms

Two memory update strategies were investigated:

  • Cumulative Memory: Appends a log of the most recent interaction (message sent/received, report generated) to the existing memory string. This is simple but can lead to very long contexts exceeding LLM limits.
  • Reflective Memory: Implements a summarization mechanism, akin to techniques used in generative agent simulations. Periodically, the agent's LLM is prompted to reflect on recent experiences and integrate them into a more concise summary, keeping the memory context manageable while retaining key information.

The choice between these impacts computational cost (context length) and potentially the fidelity of long-term memory representation.
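
A minimal sketch of a reflective update, assuming a generic llm_generate helper and a simple length-based trigger (both are illustrative, not the paper's exact mechanism):

def reflect_memory(agent, llm_generate, max_chars=4000):
    # Periodically compress the agent's memory into a concise summary so the
    # prompt context stays within the LLM's limit (reflective memory).
    if len(agent.memory) <= max_chars:
        return
    prompt = (
        f"Persona/Memory:\n{agent.memory}\n\n"
        "Instruction: Summarize who you are, your current opinion, and the key "
        "points from your recent interactions in a few sentences."
    )
    agent.memory = llm_generate(prompt)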

Simulation Environment and Control

The simulations were conducted in a "closed-world" setting. Agents were explicitly prompted not to consult external knowledge sources but to base their opinions solely on their initial persona and subsequent interactions within the simulation. This aligns with typical ABM assumptions and aims to isolate the dynamics emerging from agent interactions.

Pseudocode for Interaction Step

import random

def run_interaction_step(agents, topic, t):
    # 1. Select speaker i and listener j randomly
    i, j = random.sample(range(len(agents)), 2)
    speaker = agents[i]
    listener = agents[j]

    # 2. Speaker generates message
    speaker_prompt = f"Persona/Memory:\n{speaker.memory}\n\nInstruction: Express your current opinion on '{topic}' in a short message."
    message_x_i = speaker.LLM.generate(speaker_prompt)

    # 3. Listener processes message and generates report
    listener_prompt = f"Persona/Memory:\n{listener.memory}\n\nReceived Message: '{message_x_i}'\n\nInstruction: Read the message. State your reaction and updated belief about '{topic}'."
    report_r_j = listener.LLM.generate(listener_prompt)

    # 4. Classify listener's opinion
    opinion_o_j = opinion_classifier(report_r_j)
    listener.current_opinion = opinion_o_j # Store numerical opinion

    # 5. Update memories (example for cumulative memory)
    speaker.memory += f"\n\n[Interaction at step {t}]: Sent message: '{message_x_i}'"
    listener.memory += f"\n\n[Interaction at step {t}]: Received message: '{message_x_i}'. My reaction and updated belief: '{report_r_j}'"

    # (Reflective memory would involve periodic summarization prompts)

    return i, j, message_x_i, report_r_j, opinion_o_j
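
The pseudocode above calls an opinion_classifier implementing $f_{oc}$ (step 4 of the protocol). The paper fine-tuned FLAN-T5-XXL for this; as an illustrative stand-in, a zero-shot prompt-based classifier with the Hugging Face transformers pipeline could look like the following (the prompt wording and label parsing are assumptions, not the authors' setup):

from transformers import pipeline

# Zero-shot sketch of the opinion classifier f_oc; the paper instead used a
# fine-tuned FLAN-T5-XXL, so treat this as an approximation.
generator = pipeline("text2text-generation", model="google/flan-t5-xxl")

def opinion_classifier(report: str, topic: str = "climate change") -> int:
    # Map a free-text belief report to a discrete opinion score on [-2, +2].
    prompt = (
        f"Report: {report}\n"
        f"On a scale from -2 (strongly negative) to 2 (strongly positive), what is the "
        f"author's opinion about {topic}? Answer with a single integer."
    )
    answer = generator(prompt, max_new_tokens=4)[0]["generated_text"].strip()
    try:
        return max(-2, min(2, int(answer)))
    except ValueError:
        return 0  # fall back to neutral if the output is unparseable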

Inherent Bias Towards Ground Truth

A primary finding was the strong tendency of the LLM agents (specifically RLHF-tuned models like ChatGPT) to converge towards the scientifically accepted "ground truth" on topics like climate change, regardless of their initial programmed opinions or the framing of the issue.

  • Convergence: Simulations initialized with diverse opinions, even those heavily skewed against the scientific consensus (e.g., all agents initially believing climate change is a hoax), consistently showed agents shifting towards acknowledging the reality of climate change. This resulted in a final mean opinion (bias, $B$) aligning with the ground truth.
  • Robustness: This occurred across different initial opinion distributions and framings (presenting the topic as true vs. false).
  • Source of Bias: Control experiments suggested this bias stems from the underlying LLM's training (likely RLHF, which optimizes for helpfulness and truthfulness according to human preferences) rather than solely emerging from the agent interactions. Querying the base LLM without the simulation context showed a similar tendency.
  • Asymmetry: The bias was stronger in refuting false statements (negative framing leading to strongly negative $B$) than in endorsing true statements (positive framing leading to positive $B$ closer to neutral). This might reflect RLHF prioritizing the correction of misinformation.
  • Limitations for Simulation: This inherent "truth bias" poses a significant challenge for simulating real-world phenomena where individuals or groups persistently hold beliefs contrary to established facts (e.g., vaccine hesitancy, conspiracy theories). The agents struggle to realistically maintain non-factual viewpoints.
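
Given the classified final opinions, the bias $B$ (mean of the final opinion distribution $F_o^T$) and the diversity $D$ (its standard deviation, used in the next section) can be computed directly; a minimal sketch with illustrative variable names:

import numpy as np

def opinion_metrics(final_opinions):
    """Compute bias B (mean) and diversity D (std) of the final opinion distribution F_o^T."""
    opinions = np.asarray(final_opinions, dtype=float)
    bias_B = opinions.mean()       # population mean on the -2..+2 scale
    diversity_D = opinions.std()   # spread / fragmentation of final opinions
    return bias_B, diversity_D

# Illustrative usage on the -2..+2 scale after T steps
B, D = opinion_metrics([2, 2, 1, 2, 1, 2, 2, 1, 2, 2])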

Inducing Confirmation Bias via Prompting

To explore if LLM agents could replicate known socio-psychological biases influencing opinion dynamics, the authors experimentally induced confirmation bias by modifying the agents' initial prompts.

  • Method: Specific instructions were added to the persona/memory prompt to encourage confirmation bias (a prompt-injection sketch follows this list):
    • Weak Bias: "You are more likely to believe information that confirms your existing views and less likely to believe information that contradicts them."
    • Strong Bias: "You only believe information that confirms your existing views and completely dismiss information that contradicts them."
  • Effect on Dynamics: Introducing confirmation bias led to a systematic increase in the diversity (standard deviation, $D$) of the final opinion distribution $F_o^T$. Stronger induced bias resulted in greater opinion fragmentation (higher $D$), preventing convergence to a single consensus.
  • Alignment with ABM: This result qualitatively replicates findings from traditional ABMs where implementing confirmation bias rules prevents consensus and leads to opinion clustering or polarization.
  • Significance: This demonstrates that, despite the inherent truth bias, LLM agent behavior can be modulated through prompt engineering to exhibit specific cognitive biases, influencing macroscopic simulation outcomes in predictable ways consistent with social science theory.
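
A minimal sketch of injecting these instructions into an agent's persona/memory (the dictionary structure and function name are illustrative; only the two quoted instructions come from the paper):

BIAS_INSTRUCTIONS = {
    "none": "",
    "weak": ("You are more likely to believe information that confirms your existing "
             "views and less likely to believe information that contradicts them."),
    "strong": ("You only believe information that confirms your existing views and "
               "completely dismiss information that contradicts them."),
}

def apply_confirmation_bias(agent, level="weak"):
    # Append the bias instruction to the persona so it conditions every subsequent prompt.
    instruction = BIAS_INSTRUCTIONS[level]
    if instruction:
        agent.memory = f"{agent.memory}\n\n{instruction}"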

Implementation Considerations

Researchers aiming to implement similar simulations should consider:

  • LLM Choice: The paper predominantly used gpt-3.5-turbo-16k. The behavior, particularly the inherent bias, might differ significantly with other models (e.g., non-RLHF base models, models fine-tuned on specific discourse data).
  • Prompt Engineering: Crafting effective prompts for personas, interaction instructions, bias induction, and memory reflection is crucial and likely requires significant iteration. The exact wording can heavily influence agent behavior.
  • Opinion Classification: A reliable method ($f_{oc}$) to map qualitative LLM outputs (verbal reports) to quantitative opinion scores is necessary for analysis. Fine-tuning a classifier like FLAN-T5-XXL requires labeled data or careful prompt-based classification design.
  • Computational Cost: Running simulations with multiple LLM agents over many time steps is computationally intensive and expensive due to repeated API calls. Each interaction step involves at least two LLM generation calls and one classification call. For $N=10$ agents and $T=20$ steps, this implies hundreds of LLM calls per simulation run.
  • Memory Management: Cumulative memory is simpler but risks exceeding context limits. Reflective memory requires more complex prompting for summarization but offers better scalability for longer simulations. The quality of reflection/summarization is critical.
  • Network Structure: The paper used random dyadic interactions (fully connected graph). Implementing more realistic network topologies (e.g., scale-free, small-world) or incorporating homophily (agents preferentially interacting with similar others) would require modifying the agent selection mechanism; see the sketch after this list.
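
For the last point, a hedged sketch of neighbor-constrained partner selection over an explicit graph (using networkx; the topology choice and helper names are assumptions, whereas the paper itself used fully random dyadic pairing):

import random
import networkx as nx

def select_pair_on_network(G: nx.Graph):
    """Pick a speaker and one of its neighbors, replacing fully random dyadic pairing."""
    speaker = random.choice(list(G.nodes))
    listener = random.choice(list(G.neighbors(speaker)))
    return speaker, listener

# Example: a small-world topology over 10 agents; the returned indices can be used
# to look up agents in the same list passed to run_interaction_step.
G = nx.watts_strogatz_graph(n=10, k=4, p=0.1)
i, j = select_pair_on_network(G)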

Limitations and Future Directions

The paper highlights several limitations and avenues for future research:

  • Model Dependency: Findings are potentially specific to the RLHF-tuned models tested. Further work is needed across diverse LLM architectures and training paradigms.
  • Opinion Representation: Mapping complex beliefs onto a single scalar score $o_i^t$ is a simplification. Future work could involve more sophisticated state representations or qualitative analysis of the generated text.
  • Topic Scope: The focus was on topics with a clear ground truth. Simulating debates on subjective, value-laden, or purely political issues presents different challenges.
  • Demographic and Network Effects: The influence of specific demographic attributes and network structure on opinion dynamics was not deeply explored and remains an area for future investigation.
  • Overcoming Truth Bias: The paper suggests that prompt engineering alone may be insufficient to create agents that realistically maintain diverse or inaccurate beliefs. The proposed solution is to move beyond prompting and fine-tune LLM agents on large datasets of real-world human discourse. This could imbue agents with more authentic, potentially biased, patterns of reasoning and communication, enabling more accurate simulations of phenomena like polarization and misinformation resilience.

Conclusion

Using LLM-based agents offers a promising, more expressive alternative to traditional ABMs for simulating opinion dynamics. The framework allows for nuanced interactions based on natural language and complex agent personas. However, current RLHF-trained LLMs exhibit a strong inherent bias towards ground truth, limiting their ability to model persistent disagreement or misinformation effects accurately. While prompt engineering can induce specific cognitive biases like confirmation bias, leading to outcomes like opinion fragmentation, overcoming the core truth bias likely requires fine-tuning agents on real-world discourse data. This approach presents significant computational costs and methodological challenges but holds potential for building more sophisticated and realistic models of social phenomena.

Authors (9)
  1. Yun-Shiuan Chuang (14 papers)
  2. Agam Goyal (9 papers)
  3. Nikunj Harlalka (3 papers)
  4. Siddharth Suresh (11 papers)
  5. Robert Hawkins (5 papers)
  6. Sijia Yang (18 papers)
  7. Dhavan Shah (5 papers)
  8. Junjie Hu (111 papers)
  9. Timothy T. Rogers (15 papers)
Citations (33)