
AAPA: Antidemocratic Attitudes & Partisan Animosity

Updated 5 December 2025
  • The paper introduces AAPA by defining eight empirically validated factors that quantify partisan hostility and support for undemocratic practices.
  • It operationalizes AAPA using large-scale survey data and LLM-driven classification to score political posts for algorithmic intervention.
  • Results show that tailored feed re-ranking based on AAPA scores effectively moderates negative affect and supports democratic discourse without hurting engagement.

Antidemocratic Attitudes and Partisan Animosity (AAPA) constitute a multidimensional construct capturing expressions in political discourse that simultaneously undermine democratic norms and intensify affective polarization. Codified through large-scale political psychology measurement and operationalized for algorithmic intervention in social media, AAPA serves as both a theoretical lens and a practical objective for automated content moderation and ranking. Recent experimental literature demonstrates the causal impacts of AAPA exposure on out-party affect, immediate emotional response, and the feasibility of AI-driven mitigation without adverse effects on engagement metrics (Piccardi et al., 22 Nov 2024, Jia et al., 2023).

1. Conceptualization and Measurement

AAPA is systematically defined through eight empirically validated factors, derived from political science survey mega-studies. The factors are:

  1. Partisan animosity: dislike of the opposing party
  2. Support for undemocratic practices: willingness to forgo democratic procedure for partisan ends
  3. Support for partisan violence: endorsement of physical force against political adversaries
  4. Support for undemocratic candidates: preference for candidates challenging democratic norms
  5. Opposition to bipartisan cooperation
  6. Social distrust
  7. Social distance
  8. Biased evaluation of politicized facts

Each post is annotated using LLMs (e.g., GPT-3.5, GPT-4) in a zero-shot or few-shot classification paradigm. The factors are detected by binary (or ordinal) coding, typically producing a vector $\mathbf{v} = (v_1, v_2, \dots, v_8)$ per post, where $v_i \in \{0, 1\}$ (binary) or $v_i \in \{1, 2, 3\}$ (ordinal in manual/LLM codebook-driven studies). The aggregate AAPA score for post $j$ is:

$$\mathrm{AAPA\_score}_j = \sum_{i=1}^{8} v_{ij}$$

For binary vectors, this yields a score in $[0, 8]$; for 1–3 ordinal coding, in $[8, 24]$ (Piccardi et al., 22 Nov 2024, Jia et al., 2023).

A post is typically labeled as "AAPA" if its score meets or exceeds a context-specific threshold (e.g., $\geq 4$ for binary coding).
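The binary scoring and threshold rule can be sketched as follows (a minimal illustration; the factor names and the `is_aapa` helper are labels chosen here, not identifiers from the papers):

```python
from typing import Sequence

# Illustrative names for the eight AAPA subdimensions listed above.
FACTORS = [
    "partisan_animosity",
    "undemocratic_practices",
    "partisan_violence",
    "undemocratic_candidates",
    "anti_bipartisanship",
    "social_distrust",
    "social_distance",
    "biased_fact_evaluation",
]

def aapa_score(v: Sequence[int]) -> int:
    """Sum the eight binary (0/1) factor codes for one post."""
    if len(v) != len(FACTORS):
        raise ValueError("expected one code per factor")
    return sum(v)

def is_aapa(v: Sequence[int], threshold: int = 4) -> bool:
    """Label a post as AAPA when its score meets the threshold."""
    return aapa_score(v) >= threshold

# A post coded positive for partisan animosity, violence, and distrust:
codes = [1, 0, 1, 0, 0, 1, 0, 0]
print(aapa_score(codes))  # → 3
print(is_aapa(codes))     # → False (below the >= 4 threshold)
```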

2. Algorithmic Intervention and Feed Re-Ranking

AAPA scoring enables explicit algorithmic manipulation of social media feeds:

  • Input Filtering: Posts are first screened using a fine-tuned LLM (e.g., RoBERTa with $F_1 \approx 0.93$) for political relevance—defined as content explicitly concerning politicians, policy, social issues, news, or events.
  • LLM Factor Annotation: Each filtered post is then classified for all eight AAPA subdimensions by an LLM, resulting in presence/absence (YES/NO) per factor, with caching for identical text to control API cost and latency.
  • Score-Based Reordering: Depending on the intervention arm, posts with high AAPA scores are downranked (added penalty proportional to position and severity), inserted (sampled from a high-AAPA inventory), warn-blurred, or removed entirely.
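The annotation-with-caching stage can be sketched as below. This is a sketch under stated assumptions: `classify_factor` is a deterministic stub standing in for a real per-factor YES/NO LLM call, and the prompt wordings are invented for illustration; only the hash-keyed cache for identical text mirrors the design described above.

```python
import hashlib

calls = 0  # counts simulated LLM calls, to show the cache working

# Hypothetical YES/NO prompts, one per factor (two shown for brevity).
FACTOR_PROMPTS = {
    "partisan_animosity": "Does the post express dislike of the opposing party? YES/NO",
    "undemocratic_practices": "Does the post endorse forgoing democratic procedure? YES/NO",
}

def classify_factor(text, prompt):
    """Stub: a deployment would send `prompt` plus `text` to an LLM here."""
    global calls
    calls += 1
    # Trivial keyword heuristic purely so the sketch runs end to end.
    return "hate" in text.lower()

_cache = {}

def annotate(text):
    """Annotate one post, caching identical text to control API cost/latency."""
    key = hashlib.sha256(text.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = {f: classify_factor(text, p) for f, p in FACTOR_PROMPTS.items()}
    return _cache[key]

annotate("I hate the other side")
annotate("I hate the other side")  # cache hit: no additional LLM calls
print(calls)  # → 2 (one call per prompt, made only once)
```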

For example, in reduced-exposure conditions, the penalty for each AAPA post is:

$$\text{penalty} = \text{index}(p) \times \mathrm{AAPA\_score}(p) \times 10$$

Downranking typically affects ~85 posts per user per day (median 22); up-ranking via insertion introduces ~11 posts per user per day (median 5.4) (Piccardi et al., 22 Nov 2024).
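The downranking rule can be sketched as follows, assuming the penalty is added to each post's original position to form its sort key (an assumption; the papers do not spell out the exact combination step):

```python
def rerank_reduced_exposure(feed, aapa_score, weight=10):
    """Downrank AAPA posts: penalty = index(p) * AAPA_score(p) * weight.

    `feed` is a list of posts in original order; `aapa_score` maps a post
    to its 0-8 score. Posts with score 0 keep their original ordering.
    """
    def key(item):
        index, post = item
        return index + index * aapa_score(post) * weight
    return [post for _, post in sorted(enumerate(feed), key=key)]

# Toy feed: post "b" has a high AAPA score and is pushed down.
scores = {"a": 0, "b": 6, "c": 0, "d": 1}
print(rerank_reduced_exposure(["a", "b", "c", "d"], scores.get))
# → ['a', 'c', 'd', 'b']
```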

3. Experimental Designs and Statistical Analysis

RCT field deployments have randomized thousands of U.S. partisan users to sustained (7–10 day) feed interventions with:

  • Pre-baseline Measurement: 3-day unaltered feed to establish base metrics (AAPA score exposure, affect, engagement).
  • Treatment Arms:
    • Reduced-Exposure (downrank high-AAPA posts)
    • Increased-Exposure (insert high-AAPA posts from out-of-feed inventory)
    • Control (original feed ordering)

Daily and sessional controls are enforced by a browser extension that intercepts each feed render. In-feed surveys (immediate affect; 0–100 slider) and post-experiment affect and emotion surveys (frequency metrics) are injected at consistent feed slots (Piccardi et al., 22 Nov 2024).

Outcome Variables:

  • Affective polarization: Out-party thermometer (0=cold, 100=warm)
  • Emotional response: Immediate (anger, sadness, excitement, calm) and post-trial (frequency of 15 emotions)
  • Engagement: Sessions, time-on-platform, reshare/favorite/reply rates

Statistical Modeling:

  • Post-experiment (OLS): $y_i^\text{post} = \beta_0 + \beta_1 \mathrm{Treatment}_i + \beta_2\, y_i^\text{pre} + \beta_3 \mathrm{Platform}_i + \epsilon_i$
  • In-feed (LME): $y_{it} = \beta_0 + \beta_1 \mathrm{Treatment}_i + \beta_2\, \overline{y}_i + \beta_3 \mathrm{Platform}_i + u_i + \epsilon_{it}$
  • All CIs are 95%, with sharpened FDR correction where preregistered (Piccardi et al., 22 Nov 2024, Jia et al., 2023).
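The post-experiment OLS specification can be illustrated on synthetic data (all numbers here are simulated for the sketch, not the study's data; the fit simply recovers the treatment coefficient planted in the simulation):

```python
import numpy as np

# y_post = b0 + b1*Treatment + b2*y_pre + b3*Platform + e, fit by least squares.
rng = np.random.default_rng(0)
n = 2000
treatment = rng.integers(0, 2, n)     # 1 = reduced-exposure arm
pre = rng.normal(50.0, 15.0, n)       # pre-period out-party thermometer
platform = rng.integers(0, 2, n)      # e.g. 1 = platform A, 0 = platform B
post = 5.0 + 2.11 * treatment + 0.9 * pre + 1.0 * platform + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), treatment, pre, platform])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
print(beta[1])  # b1: close to the simulated +2.11 treatment effect
```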
| Intervention | Main Effect on Out-Party Feeling (thermometer °) | Effect Size (Cohen's d) | Engagement Impact |
|---|---|---|---|
| Reduced AAPA exposure | +2.11 [0.15, 4.06] | –0.25 | Null |
| Increased AAPA exposure | –2.48 [–4.79, –0.17] | –0.25 | Null |
| Remove/Replace (manual) | — | –0.20 | Null |

4. Effects on Affective Polarization and Emotion

Field and platform-mimicking experiments demonstrate:

  • Reduced AAPA exposure causally increases out-party warmth post-experiment (β₁ = +2.11°, $p = 0.035$) and in-feed (β₁ = +3.24°, $p = 0.002$).
  • Increased AAPA exposure decreases out-party warmth (β₁ = –2.48°, $p = 0.036$; in-feed β₁ = –2.56°, $p = 0.011$), corresponding to ~3.6 years of affective polarization change in U.S. time series.
  • Emotional response: Immediate increases in negative affect (anger Δ = +5.13, sadness Δ = +4.38 on the 0–100 slider, $p < 0.04$) under increased AAPA; decreases (anger Δ = –5.05, sadness Δ = –3.68) under reduced AAPA. Positive emotion metrics were not significantly affected.
  • Engagement metrics: No significant effect on sessions, favorites, reshares, or platform dwell time (all $p > 0.10$), suggesting no adverse business trade-off (Piccardi et al., 22 Nov 2024, Jia et al., 2023).

5. Societal Objective Functions and Value Alignment

AAPA operationalization enables the explicit insertion of societal objective functions into ranking architectures (Jia et al., 2023). By penalizing high-AAPA content in the ranking score (e.g., $S_\text{combined}(p) = S_\text{eng}(p) - \lambda \cdot \mathrm{AAPA}(p)$), platforms can surface content less likely to erode democratic norms or amplify partisan hostility, without measurable loss in engagement.
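The combined objective can be sketched directly from that formula (the post tuples, scores, and the λ = 0.5 value are invented for illustration; λ is the policy knob trading engagement against the societal objective):

```python
def combined_score(engagement_score, aapa_score, lam=0.5):
    """S_combined(p) = S_eng(p) - lambda * AAPA(p)."""
    return engagement_score - lam * aapa_score

# Toy candidates as (post_id, S_eng, AAPA_score); "p2" wins on engagement
# alone but is demoted once its AAPA score is penalized.
posts = [("p1", 3.2, 0), ("p2", 3.5, 6), ("p3", 2.9, 1)]
ranked = sorted(posts, key=lambda p: combined_score(p[1], p[2]), reverse=True)
print([p[0] for p in ranked])  # → ['p1', 'p3', 'p2']
```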

Experiments with both manual and LLM-driven annotation demonstrate:

  • Robust interrater agreement for both manual and GPT-4 automated labels ($\rho = 0.75$; Krippendorff's $\alpha = 0.78$–$0.895$).
  • Replicable affective polarization reductions across manual and LLM-driven ranking (downranking: $d \approx -0.25$).
  • Effects are concentrated among weak partisans; strong partisans are less responsive.

The pipeline of survey construct → codebook → LLM classifier → feed intervention constitutes a generalizable model for encoding and deploying societal objectives into recommender and ranking systems, pending construct validity and LLM agreement.

6. Limitations and Considerations

  • Sample and exposure scope: Most experimental samples consist of U.S. partisan users; external validity to other populations or to mixed-content feeds is unknown.
  • Intervention duration: Demonstrated effects pertain primarily to short-to-medium horizon (single session to 10 days).
  • Engagement and secondary metrics: Consistently null effects, but intervention on other content axes may present trade-offs (e.g., diversity, well-being).
  • Model risks: Domain generalization, prompt injection, model drift, and LLM bias remain open technical hazards for deploying automated AAPA annotation at scale.
  • Normative trade-offs: Content warnings, while effective in some arms, introduce substantial freedom-of-speech concerns; downranking and removal less so (Jia et al., 2023).

7. Implications for Platform Design and Democratic Health

There is robust causal evidence that algorithmic exposure to content high in AAPA directly shapes affective polarization and negative emotion, independent of engagement. Automated re-ranking driven by explicit societal objectives—detecting and downweighting AAPA content—offers a technically feasible intervention for mitigating affective polarization and cultivating conditions more conducive to democratic discourse (Piccardi et al., 22 Nov 2024, Jia et al., 2023). A plausible implication is that recommender system designers can incorporate validated societal objectives (e.g., AAPA, well-being, misinformation) alongside engagement to better align output with societal values.

Embedding LLM-driven AAPA detectors within ranking protocols represents a scalable strategy for recalibrating large-scale information environments in line with the preservation of democratic norms.
