
Accountability of AI Under the Law: The Role of Explanation (1711.01134v3)

Published 3 Nov 2017 in cs.AI and stat.ML

Abstract: The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation, so thinking carefully about when and how explanation from AI systems might improve accountability is timely. In this work, we review contexts in which explanation is currently required under the law, and then list the technical considerations that must be addressed if we want AI systems to provide the kinds of explanations currently required of humans.

Authors (12)
  1. Finale Doshi-Velez (134 papers)
  2. Mason Kortz (2 papers)
  3. Ryan Budish (1 paper)
  4. Chris Bavitz (1 paper)
  5. Sam Gershman (4 papers)
  6. David O'Brien (3 papers)
  7. Kate Scott (1 paper)
  8. Stuart Schieber (1 paper)
  9. James Waldo (1 paper)
  10. David Weinberger (4 papers)
  11. Adrian Weller (150 papers)
  12. Alexandra Wood (2 papers)
