Interpreting Social Respect: A Normative Lens for ML Models

Published 1 Aug 2019 in cs.CY (arXiv:1908.07336v1)

Abstract: Machine learning is often viewed as an inherently value-neutral process: statistical tendencies in the training inputs are "simply" used to generalize to new examples. However, when models affect social systems, such as interactions between humans, the patterns they learn have normative implications. It is important that we ask not only "what patterns exist in the data?" but also "how do we want our system to impact people?" In particular, because minority and marginalized members of society are often statistically underrepresented in data sets, models may have an undesirable disparate impact on these groups. Objectives of social equity and distributive justice therefore require that we develop tools for both identifying and interpreting the harms introduced by models.
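
The abstract does not name a specific measurement, but one common tool for identifying disparate impact is to compare a model's positive-prediction rates across demographic groups, with ratios below roughly 0.8 conventionally flagged (the "four-fifths rule" from US employment guidelines). The sketch below is an illustrative assumption, not the paper's method; the function name disparate_impact_ratio and the toy data are hypothetical.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of the lowest to the highest positive-prediction rate
    across groups. Values below ~0.8 are often flagged as disparate
    impact (the "four-fifths rule"). Illustrative sketch only; this
    metric is an assumption, not taken from the paper."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    # Positive-prediction (selection) rate per group.
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

# Toy example: the model approves group "a" far more often than "b".
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(disparate_impact_ratio(y_pred, group))  # 0.25 -> strong disparity
```

A ratio near 1.0 indicates similar selection rates across groups; here group "a" is selected at 0.8 versus 0.2 for group "b", so the ratio falls well below the conventional 0.8 threshold.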
