What are you optimizing for? Aligning Recommender Systems with Human Values (2107.10939v1)

Published 22 Jul 2021 in cs.IR, cs.CY, and cs.LG

Abstract: We describe cases where real recommender systems were modified in the service of various human values such as diversity, fairness, well-being, time well spent, and factual accuracy. From this we identify the current practice of values engineering: the creation of classifiers from human-created data with value-based labels. This has worked in practice for a variety of issues, but problems are addressed one at a time, and users and other stakeholders have seldom been involved. Instead, we look to AI alignment work for approaches that could learn complex values directly from stakeholders, and identify four major directions: useful measures of alignment, participatory design and operation, interactive value learning, and informed deliberative judgments.
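The "values engineering" pattern the abstract identifies, training a classifier on human-created value-based labels and folding its scores into ranking, can be illustrated with a minimal sketch. The toy labels, text features, and blending weight below are illustrative assumptions, not the production systems the paper studies.

```python
# Minimal sketch of "values engineering": humans label items against a
# value (here, factual accuracy), a classifier is trained on those
# labels, and its scores adjust the recommender's ranking.
# The data, features, and blending weight are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled examples: 1 = upholds the value, 0 = violates it.
texts = [
    "Peer-reviewed study finds vaccine reduces hospitalization",
    "Official report details infrastructure spending",
    "Miracle cure doctors don't want you to know about",
    "Shocking secret they are hiding from you",
]
labels = [1, 1, 0, 0]

value_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
value_clf.fit(texts, labels)

def rerank(candidates, engagement_scores, weight=0.5):
    """Blend each item's base engagement score with the value
    classifier's probability so value-aligned items rank higher.
    `weight` is an assumed tuning knob, set per value in practice."""
    value_scores = value_clf.predict_proba(candidates)[:, 1]
    blended = [(e + weight * v, c)
               for c, e, v in zip(candidates, engagement_scores, value_scores)]
    return [c for _, c in sorted(blended, reverse=True)]

# A high-engagement but low-accuracy item can be outranked after blending.
print(rerank(
    ["Shocking secret they are hiding from you",
     "Peer-reviewed study finds vaccine reduces hospitalization"],
    engagement_scores=[0.9, 0.6],
))
```

This one-classifier-per-value structure is what produces the limitation the abstract notes: each value requires its own labeling effort and model, and problems are addressed one at a time.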

Authors (5)
  1. Jonathan Stray (9 papers)
  2. Ivan Vendrov (6 papers)
  3. Jeremy Nixon (8 papers)
  4. Steven Adler (5 papers)
  5. Dylan Hadfield-Menell (54 papers)
Citations (50)