
Interpreting Social Respect: A Normative Lens for ML Models (1908.07336v1)

Published 1 Aug 2019 in cs.CY

Abstract: Machine learning is often viewed as an inherently value-neutral process: statistical tendencies in the training inputs are "simply" used to generalize to new examples. However, when models impact social systems such as interactions between humans, the patterns learned by models have normative implications. It is important that we ask not only "what patterns exist in the data?" but also "how do we want our system to impact people?" In particular, because minority and marginalized members of society are often statistically underrepresented in data sets, models may have undesirable disparate impact on such groups. As such, objectives of social equity and distributive justice require that we develop tools for both identifying and interpreting harms introduced by models.
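The disparate impact the abstract describes is commonly quantified as the ratio of favorable-outcome rates between a protected group and a reference group (the "four-fifths rule" flags ratios below 0.8). A minimal sketch, not taken from the paper, with hypothetical group labels and toy data:

```python
# Hypothetical illustration: disparate impact as the ratio of
# positive-outcome rates between two groups.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of the favorable-outcome rate for `protected` to that of
    `reference`; the four-fifths rule compares this against 0.8."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Toy data: 1 = favorable model decision, 0 = unfavorable.
outcomes = [1, 0, 0, 1, 1, 1, 0, 1]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact_ratio(outcomes, groups, protected="a", reference="b")
print(round(ratio, 2))  # 0.5 / 0.75 ≈ 0.67, below the 0.8 threshold
```

Here group "a" receives favorable outcomes at two-thirds the rate of group "b", the kind of statistical disparity the paper argues requires a normative, not purely statistical, interpretation.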

Citations (2)
