
Representation Bias in Political Sample Simulations with Large Language Models (2407.11409v1)

Published 16 Jul 2024 in cs.CL

Abstract: This study seeks to identify and quantify biases in simulating political samples with LLMs, specifically focusing on vote choice and public opinion. Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao Dataset, and China Family Panel Studies to simulate voting behaviors and public opinions. This methodology enables us to examine three types of representation bias: disparities based on the country's language, demographic groups, and political regime types. The findings reveal that simulation performance is generally better for vote choice than for public opinion, more accurate in English-speaking countries, more effective in two-party systems than in multi-party systems, and stronger in democratic settings than in authoritarian regimes. These results deepen our understanding of representation bias and inform strategies to mitigate it in AI applications within computational social science.
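
The abstract does not spell out the prompting setup, but the persona-conditioned simulation it describes can be sketched as follows. This is a minimal illustration, assuming a standard chat-completion call to GPT-3.5-Turbo; the prompt template, respondent fields, and candidate names are invented for the example and are not the paper's actual protocol.

```python
# Hypothetical sketch of persona-based vote-choice simulation with GPT-3.5-Turbo.
# The profile fields, prompt wording, and candidates below are illustrative
# assumptions, not the paper's published methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def simulate_vote_choice(persona: dict, candidates: list[str]) -> str:
    """Ask the model to role-play a survey respondent and pick one candidate."""
    profile = ", ".join(f"{k}: {v}" for k, v in persona.items())
    prompt = (
        f"You are a survey respondent with the following profile: {profile}. "
        f"In the upcoming election, which candidate do you vote for? "
        f"Answer with exactly one of: {', '.join(candidates)}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for reproducibility
    )
    return response.choices[0].message.content.strip()


# Example respondent loosely modeled on an ANES-style record (values invented).
persona = {
    "age": 45,
    "gender": "female",
    "state": "Ohio",
    "party identification": "Independent",
    "education": "college degree",
}
print(simulate_vote_choice(persona, ["Candidate A", "Candidate B"]))
```

Comparing the distribution of simulated answers against the real survey marginals, per demographic group and per country, is what lets a study like this quantify representation bias.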

Authors (3)
  1. Weihong Qi (10 papers)
  2. Hanjia Lyu (53 papers)
  3. Jiebo Luo (355 papers)
Citations (2)
