Value Alignment of Social Media Ranking Algorithms (2509.14434v1)

Published 17 Sep 2025 in cs.HC and cs.SI

Abstract: While social media feed rankings are primarily driven by engagement signals rather than any explicit value system, the resulting algorithmic feeds are not value-neutral: engagement may prioritize specific individualistic values. This paper presents an approach for social media feed value alignment. We adopt Schwartz's theory of Basic Human Values -- a broad set of human values that articulates complementary and opposing values forming the building blocks of many cultures -- and we implement an algorithmic approach that models and then ranks feeds by expressions of Schwartz's values in social media posts. Our approach enables controls where users can express weights on their desired values, combining these weights and post value expressions into a ranking that respects users' articulated trade-offs. Through controlled experiments (N=141 and N=250), we demonstrate that users can use these controls to architect feeds reflecting their desired values. Across users, value-ranked feeds align with personal values, diverging substantially from existing engagement-driven feeds.

Summary

  • The paper presents a framework that aligns feed ranking with human values via LLM-driven annotation using Schwartz's 19-value model, achieving a low MAE of 0.95.
  • It validates the approach through experiments where users recognized and configured feeds reflecting their prioritized values over standard engagement metrics.
  • The findings imply that incorporating human values into ranking algorithms can enhance user agency and mitigate filter bubbles on social media.

Value Alignment of Social Media Ranking Algorithms

Introduction

This paper presents a comprehensive framework for aligning social media feed ranking algorithms with human values, operationalized via Schwartz's theory of Basic Human Values. The authors argue that engagement-driven ranking is not value-neutral and tends to amplify individualistic, short-term values. They propose a method for classifying and integrating value expressions in social media posts, enabling user-driven value prioritization in feed curation. The approach is validated through controlled experiments, demonstrating that users can recognize and configure feeds that reflect their articulated values, with value-ranked feeds diverging substantially from engagement-based rankings.

Operationalizing Human Values in Feed Ranking

The core of the framework is the adoption of Schwartz's circumplex model of 19 basic human values, grouped into four clusters: self-transcendence, conservation, self-enhancement, and openness to change. This model provides structural coverage of the value design space, capturing both complementary and conflicting motivations (Figure 1).

Figure 1: Visualization of Schwartz's 19 Basic Human Values, organized into four broader groups and mapped onto a circumplex to illustrate motivational proximities and tensions.
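For concreteness, the 19 values and their four higher-order groups can be held in a simple data structure. The sketch below is a rough mapping using names from Schwartz's refined values theory; the paper's exact labels, and the cluster placement of border values such as "Face", "Humility", and "Hedonism", may differ.

```python
# Rough mapping of Schwartz's 19 refined values onto the four higher-order groups.
# Value names and the placement of border values (Face, Humility, Hedonism) are
# illustrative and may not match the paper's exact labels.
SCHWARTZ_CLUSTERS = {
    "self_transcendence": [
        "benevolence_caring", "benevolence_dependability", "humility",
        "universalism_concern", "universalism_nature", "universalism_tolerance",
    ],
    "conservation": [
        "conformity_rules", "conformity_interpersonal", "tradition",
        "security_personal", "security_societal", "face",
    ],
    "self_enhancement": [
        "achievement", "power_dominance", "power_resources",
    ],
    "openness_to_change": [
        "self_direction_thought", "self_direction_action",
        "stimulation", "hedonism",
    ],
}

VALUE_NAMES = [v for group in SCHWARTZ_CLUSTERS.values() for v in group]
assert len(VALUE_NAMES) == 19  # one dimension per value in the label and weight vectors
```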

To annotate social media content, the authors employ LLMs, specifically GPT-4o, using few-shot prompting to rate the presence and magnitude of each value in tweets on a 0–6 scale. The LLM classifier achieves a mean absolute error (MAE) of 0.95 ± 1.10 against consensus human labels, outperforming individual human annotators (1.07 ± 1.05 MAE) for most values. This demonstrates that LLMs can reliably scale value annotation for large content inventories.
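As a rough illustration of this annotation step, the sketch below scores one post against all 19 values with a single chat-completion call and computes a per-post MAE against consensus labels. It assumes the OpenAI Python client; the prompt text, few-shot examples, and multimodal handling are placeholders, not the authors' actual setup.

```python
import json
from openai import OpenAI  # assumes the openai Python package and an API key in the environment

client = OpenAI()

SYSTEM_PROMPT = (
    "Rate how strongly the following social media post expresses each of "
    "Schwartz's 19 basic human values. For each value, give an integer from "
    "0 (not expressed or opposed) to 6 (strongly expressed). "
    "Answer with a JSON object mapping value names to integers."
)

def annotate_post(post_text: str) -> dict:
    """Return {value_name: score} on the paper's 0-6 scale for one post."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            # Few-shot examples (posts paired with consensus labels) would be inserted here.
            {"role": "user", "content": post_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

def mae(predicted: dict, consensus: dict) -> float:
    """Mean absolute error across the 19 values, as used to compare against human labels."""
    return sum(abs(predicted[v] - consensus[v]) for v in consensus) / len(consensus)
```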

Value-Based Ranking Algorithm

The ranking algorithm integrates user-specified value weights with LLM-derived value labels for each post. Weights are assigned in [-1, 1] for each value, and the ranking score for a post is computed as the dot product of the value label vector and the weight vector. This linear aggregation allows for additive trade-offs and supports multi-value prioritization (Figure 2).

Figure 2: Example feeds ranked by different value weight configurations, illustrating the impact of prioritizing "Achievement" and "Dominance" versus "Humility" and downranking "Face".

The system supports both implicit learning of user preferences and explicit user control via interfaces. The latter is implemented through slider-based controls, allowing users to adjust the prominence of each value in their feed.
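A minimal sketch of the ranking step under the assumptions above: each post carries a value-label vector on the 0–6 scale, the user supplies a weight per value in [-1, 1], and posts are sorted by the dot product. Names and the toy three-value example are illustrative, not the authors' implementation.

```python
import numpy as np

def rank_feed(posts, value_labels, user_weights):
    """Order posts by the dot product of their value labels and the user's weights.

    posts        : list of post objects
    value_labels : (n_posts, n_values) array, scores on the 0-6 scale
    user_weights : (n_values,) array, slider settings in [-1, 1]
    """
    scores = np.asarray(value_labels, dtype=float) @ np.asarray(user_weights, dtype=float)
    order = np.argsort(-scores)  # highest score first
    return [posts[i] for i in order], scores[order]

# Toy example with only three value dimensions (caring, dominance, stimulation):
posts = ["post A", "post B"]
value_labels = [[5, 0, 2],   # post A expresses caring strongly
                [0, 4, 3]]   # post B expresses dominance and stimulation
user_weights = [1.0, -0.5, 0.0]  # uprank caring, downrank dominance, ignore stimulation
ranked, scores = rank_feed(posts, value_labels, user_weights)
# ranked == ["post A", "post B"], scores == [5.0, -2.0]
```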

Experimental Validation

Study 1: Single-Value Ranking

Participants (N=141) were asked to identify which of two feeds—one engagement-ranked, one value-ranked by their top personal value—reflected the specified value. The recognizability rate was 76.1%, significantly above random chance (p < 0.001). Most values produced recognizable feeds, with the exception of "Interpersonal Conformity" (Figure 3).

Figure 3: Experimental platform rendering side-by-side feeds for recognizability assessment.

Study 2: Multi-Value, User-Controlled Ranking

A larger experiment (N=250) evaluated recognizability when users could adjust 1–19 value sliders. Recognizability dropped as the number of sliders increased, but remained above chance (63.4% for multi-value feeds). Complexity ratings did not significantly increase with more sliders, though qualitative feedback indicated some choice overload in the full-slider condition (Figure 4).

Figure 4: User interface for adjusting value sliders to re-rank the feed.

Figure 5: Recognizability as a function of the number of sliders changed, showing a decline but remaining above random chance even for high-dimensional control.

Empirical Insights into Value Selection and Feed Composition

Analysis of user selections revealed that configured value weights were positively correlated with personal values (r = 0.39 ± 0.25), and that users often mapped topical interests to underlying values. Engagement feeds were found to prioritize individualistic values such as "Hedonism" and "Stimulation", with value-ranked feeds amplifying self-transcendence and openness to change values (Figure 6).

Figure 6: Engagement feeds prioritize individual/personal values over societal ones, as measured by value strength across participants.

Figure 7: Value-ranked feeds amplify self-transcendence and openness to change values relative to engagement feeds.

The distribution of value expressions across tweets was sparse, with most tweets not containing any given value, but 94.4% of tweets were value-laden in at least one dimension (Figure 8).

Figure 8: Distribution of labeled values across 212,663 tweets, showing sparsity and prevalence of "Stimulation" and "Hedonism".

Implications and Future Directions

Practical Implications

  • User Agency: The framework enables end-users to exert granular control over the values amplified in their feeds, addressing limitations of centralized, engagement-driven curation.
  • Algorithmic Diversity: Value-based ranking can counteract algorithmic monoculture and potentially mitigate filter bubbles by supporting pluralistic value configurations.
  • Scalability: LLM-based annotation is robust and scalable, supporting real-time value-based feed curation for large inventories.

Theoretical Implications

  • Value Trade-offs: The circumplex model supports nuanced trade-off navigation, but interface design must balance expressivity and cognitive load.
  • Generalizability: Although grounded in Schwartz's comprehensive, cross-culturally validated values, the method generalizes to other value systems with similar structural properties (e.g., Moral Foundations Theory).
  • Subjectivity in Value Perception: Future work should address personalization of value classifiers to account for individual and cultural differences in value interpretation.

Limitations

  • Bias in LLMs: Potential overrepresentation of Western-centric values in LLM outputs remains a concern; pluralistic alignment methods are needed.
  • Content Inventory Constraints: The ability to satisfy user value preferences is limited by the available content; platform-level integration could alleviate this.
  • Longitudinal Effects: The studies were single-session; long-term impacts on user behavior and engagement require further investigation.
  • Cultural Generalizability: Experiments were US-centric; cross-cultural validation is necessary.

Conclusion

The paper demonstrates that social media feed ranking algorithms can be systematically aligned with human values using a scalable, LLM-driven annotation and ranking framework grounded in cultural psychology. Users can recognize and configure feeds that reflect their articulated values, with value-ranked feeds diverging substantially from engagement-based rankings. The approach offers a pathway for democratizing feed curation, supporting pluralistic value expression, and informing future research on value alignment in AI systems.

Explain it Like I'm 14

What is this paper about?

This paper asks a simple but important question: what if your social media feed showed you posts that match the values you care about, not just the things most likely to get clicks and likes? The authors build and test a way to reorder (rank) a feed so it highlights posts that express human values like caring, fairness, achievement, or respect—letting users choose which values to amplify or tone down.

What questions did the researchers ask?

They focused on four easy-to-understand goals:

  • Can we detect which human values a social media post is expressing?
  • Can we re-rank a feed based on the values a user cares about, instead of just engagement?
  • Will users notice and prefer feeds aligned with their chosen values?
  • What happens when users adjust more than one value at a time—do the feeds still feel meaningfully different?

How did they do it?

Think of the system like a music equalizer with sliders—each slider is a value. You move the sliders up for values you want more of (like “Caring”) and down for values you want less of (like “Domination”), and your feed reorders itself to fit your settings.

Step 1: A shared “value map”

They use Schwartz’s Basic Human Values, a well-studied set of 19 values found across many cultures. These values sit on a “circle” where nearby values are similar (like “Caring” and “Tolerance”) and opposite values can be in tension (like “Achievement” vs. “Humility”). The four big groups are:

  • Self-Transcendence (e.g., Caring, Universal Concern)
  • Openness to Change (e.g., Stimulation, Self-directed Actions)
  • Conservation (e.g., Tradition, Personal/Societal Security)
  • Self-Enhancement (e.g., Achievement, Power/Dominance, Resources)

Step 2: Tagging posts with value “stickers”

They used an LLM (specifically GPT-4o) to read each post (including images and link previews) and assign scores for all 19 values:

  • 0 means the post doesn’t support that value (or even opposes it),
  • 1–6 means the value is present, from weak to strong.

Think of this like a super-fast librarian putting 19 “value stickers” on every post, with numbers showing how strongly each value appears.

They checked accuracy using a dataset where many humans had labeled thousands of posts. On average, the LLM’s labels were as good as or better than a typical human annotator compared to the group’s consensus.

Step 3: Re-ranking the feed with value sliders

Users set a weight for each value from -1 (show much less) to +1 (show much more). The system calculates a score for each post by combining:

  • the post’s value sticker strengths (0–6),
  • the user's slider settings (-1 to +1).

It then sorts posts from highest to lowest total score.

In everyday terms: posts get “points” when they match your values and lose points when they go against your values. The feed shows the most points first.
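For example, with made-up numbers: if a post scores Caring = 5 and Domination = 2, and your sliders are set to Caring = +1 and Domination = -0.5, its total is 5 × 1 + 2 × (-0.5) = 4 points. A post scoring Domination = 6 and nothing else would get 6 × (-0.5) = -3 points and sink toward the bottom.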

Step 4: Experiments with real users and real feeds

They ran controlled online studies:

  • Study 1 (N=141): Participants installed a browser extension to collect posts from their own Twitter/X “For You” feed. They filled out a standard value survey (PVQ), then saw two feeds side-by-side—one re-ranked by one of their top values vs. the original engagement-based feed—and tried to pick which one matched the named value. This happened for four values per person.
  • Study 2 (mentioned in the paper’s overview): Users directly moved value sliders and saw the feed change live, then tried to identify the value-aligned feed in a blinded comparison. Across users, value-ranked feeds were very different from engagement feeds (low similarity, average Kendall’s τ ≈ 0.06).
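As a rough sketch of how that similarity number can be computed, the snippet below compares two orderings of the same posts with SciPy's Kendall's τ; the post IDs and orderings are made up, and the paper's exact comparison procedure may differ.

```python
from scipy.stats import kendalltau

# Two orderings of the same ten posts (IDs are made up):
engagement_order = ["p3", "p1", "p7", "p2", "p9", "p5", "p8", "p0", "p6", "p4"]
value_order      = ["p5", "p9", "p2", "p6", "p0", "p3", "p4", "p8", "p1", "p7"]

# Convert each ordering to a rank per post, then correlate the two rank lists.
rank_eng = {p: i for i, p in enumerate(engagement_order)}
rank_val = {p: i for i, p in enumerate(value_order)}
post_ids = sorted(rank_eng)
tau, _ = kendalltau([rank_eng[p] for p in post_ids], [rank_val[p] for p in post_ids])
print(tau)  # values near 0, like the paper's 0.06, mean the feeds order posts very differently
```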

What did they find?

  • The LLM could reliably label values in posts: Its average error compared to human consensus was slightly lower than a typical human’s error in most values.
  • People could recognize value-aligned feeds: In Study 1, participants correctly picked the value-aligned feed about 76% of the time (well above random guessing at 50%). Over a third got all four identifications right.
  • Most values were recognizable: Values like “Caring” and “Preservation of Nature” were highly recognizable. A few (like “Interpersonal Conformity”) were harder to spot. Values in the “Openness to Change” group (like “Self-directed Actions” and “Stimulation”) tended to be less recognizable, possibly because the engagement feed already leans toward novelty.
  • Value-tuned feeds diverged from engagement: When users pushed multiple sliders, the resulting feeds differed a lot from the platform’s engagement ranking. Users often set sliders to match their personal values, and the system reflected those choices.

Why this matters: It shows we don’t have to accept a one-size-fits-all, engagement-only feed. People can steer their feeds toward the values they care about.

What does this mean and why is it important?

  • More control for users: Instead of guessing what the algorithm wants, people can decide what matters to them—like more kindness and less vanity—and the feed can follow their lead.
  • Healthier online spaces: Aligning feeds to values could reduce problems linked to engagement-only ranking, like polarization, outrage-bait, or shallow novelty. It can encourage balance—e.g., more posts about helping others, community, or the environment—if users want that.
  • Flexible for platforms and communities: The method could be used by big platforms, community-run networks, or personal tools. It fits different governance styles (centralized or user-driven).
  • Important guardrails: There’s also a risk—powerful actors could push certain values to control narratives. The authors emphasize user choice, transparency, and pluralism (many value options, not just one) to reduce that risk.

In short, the paper shows a practical, tested way to re-rank social media feeds by human values. It gives people a clear, adjustable “equalizer” for what they see online, proving that value-aware feeds are both technically possible and meaningful to users.

Knowledge Gaps

Knowledge gaps, limitations, and open questions

Below is a single list of specific gaps and open questions that remain unresolved and could guide future research:

  • Generalizability beyond Twitter/X: Does the approach (classification and reranking) transfer to other platforms (Instagram, TikTok, Reddit, Mastodon/Bluesky) with different content formats, social graphs, and ranking pipelines?
  • Language and cultural coverage: How well do value classifications perform on non-English posts, code-switching, and in culturally diverse contexts where Schwartz’s values may manifest differently or be interpreted differently?
  • Ground-truth validity and annotator bias: The human-labeled dataset’s composition and annotator demographics are not detailed; inter-annotator reliability (e.g., Krippendorff’s alpha) and potential biases in consensus labels need to be reported and audited, especially across values and topics.
  • Differentiating neutral vs counter-value content: The 0–6 scale conflates “no value present” with “value contradicted”; should negative scores (e.g., −6 to +6) or separate labels capture active opposition to a value?
  • Magnitude calibration across values: Are scores comparable across values (e.g., is a “6” on Caring commensurate with a “6” on Achievement)? Methods for cross-value calibration and normalization are not provided.
  • Model uncertainty and abstention: No confidence estimates or abstain mechanisms are used; can uncertainty-aware classification reduce mislabeling and support human-in-the-loop review for borderline cases?
  • Multi-modal robustness: Performance on images, memes, videos, embedded links, and sarcasm/irony is not analyzed separately; how does accuracy vary by modality and genre?
  • Proprietary model dependency: The pipeline relies on GPT-4o; how stable are results across model versions, and can open-source models achieve comparable performance to ensure reproducibility and cost control?
  • Adversarial gaming risks: Could creators optimize content to trigger specific value labels (Goodhart’s law)? What defenses (adversarial training, behavioral monitoring) are effective against strategic manipulation?
  • Real-time scalability and cost: What are the latency, throughput, and cost implications of classifying and reranking at platform scale in near-real-time, including API rate limits and model inference constraints?
  • Privacy and data governance: What safeguards and retention policies apply to processing users’ feeds (including images and links) via a browser extension; how is PII handled, and can on-device inference mitigate risks?
  • Engagement trade-offs: How does value-aligned reranking affect key platform metrics (session length, retention, revenue) compared to engagement-driven ranking in realistic A/B deployments?
  • Longitudinal effects: Do value-aligned feeds affect well-being, trust, civic outcomes, or polarization over time? Are there value-drift dynamics where user preferences evolve with exposure?
  • Echo chamber risks and diversity: Does tailoring by values reduce viewpoint diversity or increase homophily? How should diversity constraints or exploration policies be incorporated?
  • Governance and misuse: What protections prevent platforms or governments from imposing specific values on populations? Which transparency, consent, and opt-out mechanisms are necessary?
  • Multi-stakeholder alignment: How are conflicts between individual, community, and societal values resolved (e.g., arbitration, multi-objective optimization, collective weighting)?
  • Preference elicitation methods: How do PVQ-derived weights compare to direct slider controls or implicit learning (from behavior) in accuracy, usability, and cognitive load? What onboarding aids reduce misinterpretation of values?
  • Non-linear trade-offs: The approach uses a linear dot product; do interaction effects between values require non-linear models (e.g., learned reward functions), and does linearity suffice empirically across complex preferences?
  • Circumplex-informed constraints: Can the circumplex geometry (adjacency and opposition) be explicitly encoded (e.g., regularization that penalizes opposites or couples adjacent values) to improve coherence and stability?
  • Inventory bias: Reranking only content pre-selected by the “For You” algorithm may inherit engagement biases; how do results differ on chronological or follow-graph inventories or when expanding content retrieval?
  • Platform integration: How should value scores be combined with engagement, quality, and safety signals in a multi-objective ranker? What weighting strategies and Pareto-front analyses perform best?
  • Multi-value recognizability thresholds: At what number/complexity of simultaneously-optimized values does recognizability degrade, and can explanations or interface aids preserve perceived coherence?
  • Explainability to users: Can per-post explanations (e.g., “ranked higher due to Caring and Humility”) improve trust, understanding, and control without overwhelming users?
  • Demographic fairness: Do classification errors or value-based reranking systematically advantage or disadvantage content from particular dialects, minority groups, or creators? How should fairness audits and mitigations be designed?
  • Topic coverage: How do values manifest in non-political domains (entertainment, hobbies, niche communities), and are there coverage gaps or systematic blind spots?
  • Video and live content: The pipeline’s support for video and live streams is not evaluated; how can temporal and audio cues be incorporated into value detection reliably?
  • Prompt stability and replication: How sensitive are labels to prompt wording and few-shot examples; can standardized prompts and public benchmarks reduce variance and enable reproducible comparisons?
  • Ranking stability: How do ties, score saturation, and weight granularity (−1 to 1 in 0.25 steps) affect rank volatility; are normalization or temperature-like controls needed?
  • Safety constraints: How to enforce guardrails (e.g., limit amplification of values when linked to harmful behaviors) without overriding user autonomy? What policy frameworks are appropriate?
  • Creator impacts: How will value-aligned ranking affect reach, monetization, and perceived fairness for creators across genres and demographics?
  • Code-switching and multilingual posts: How accurate is classification on bilingual content, transliteration, and mixed-modality text (e.g., alt text, captions)?
  • Interface design: Are slider-based UIs optimal for lay users; do alternative controls (presets, narratives, wizard flows) reduce error and effort?
  • Divergence metrics: Kendall’s τ quantifies order differences; what alternative metrics (e.g., coverage, topical diversity, value intensity distributions) better capture meaningful feed changes?
  • Legal and regulatory compliance: How does value-based reranking intersect with platform liability, transparency laws, and moderation mandates across jurisdictions?