Auditing Political Exposure Bias: Algorithmic Amplification on Twitter/X During the 2024 U.S. Presidential Election (2411.01852v3)
Abstract: Approximately 50% of tweets in X's user timelines are personalized recommendations from accounts they do not follow. This raises a critical question: What political content are users exposed to beyond their established networks, and what implications does this have for democratic discourse online? In this paper, we present a six-week audit of X's algorithmic content recommendations during the 2024 U.S. Presidential Election by deploying 120 sock-puppet monitoring accounts to capture tweets from their personalized "For You" timelines. Our objective is to quantify out-of-network content exposure for right- and left-leaning user profiles and assess any potential inequalities and biases in political exposure. Our findings indicate that X's algorithm skews exposure toward a few high-popularity accounts across all users, with right-leaning users experiencing the highest level of exposure inequality. Both left- and right-leaning users encounter amplified exposure to accounts aligned with their own political views and reduced exposure to opposing viewpoints. Additionally, we observe that new accounts experience a right-leaning bias in exposure within their default timelines. Our work contributes to understanding how content recommendation systems may induce and reinforce biases while exacerbating vulnerabilities among politically polarized user groups. We underscore the importance of transparency-aware algorithms in addressing critical issues such as safeguarding election integrity and fostering a more informed digital public sphere.
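The exposure inequality the abstract describes (a few high-popularity accounts dominating what users see) is commonly quantified with a Gini coefficient over per-account exposure counts. The sketch below is an illustration of that general approach, not the paper's exact metric or data:

```python
def gini(counts):
    """Gini coefficient of non-negative per-account exposure counts.

    0.0 means exposure is spread evenly across accounts;
    values near 1.0 mean a few accounts receive almost all exposure.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Rank-weighted sum (ranks start at 1 for the smallest count).
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical exposure counts for four out-of-network accounts:
print(gini([25, 25, 25, 25]))   # perfectly even exposure -> 0.0
print(gini([0, 0, 0, 100]))     # one account dominates -> 0.75
```

Higher Gini values for one user group (e.g. right-leaning profiles, per the abstract's finding) would indicate that their timelines concentrate exposure on a smaller set of accounts.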