
Filling gaps in trustworthy development of AI (2112.07773v1)

Published 14 Dec 2021 in cs.AI and cs.CY

Abstract: The range of application of AI is vast, as is the potential for harm. Growing awareness of potential risks from AI systems has spurred action to address those risks, while eroding confidence in AI systems and the organizations that develop them. A 2019 study found over 80 organizations that published and adopted "AI ethics principles", and more have joined since. But the principles often leave a gap between the "what" and the "how" of trustworthy AI development. Such gaps have enabled questionable or ethically dubious behavior, which casts doubt on the trustworthiness of specific organizations and the field more broadly. There is thus an urgent need for concrete methods that both enable AI developers to prevent harm and allow them to demonstrate their trustworthiness through verifiable behavior. Below, we explore mechanisms (drawn from arXiv:2004.07213) for creating an ecosystem where AI developers can earn trust - if they are trustworthy. Better assessment of developer trustworthiness could inform user choice, employee actions, investment decisions, legal recourse, and emerging governance regimes.

Authors (12)
  1. Shahar Avin (10 papers)
  2. Haydn Belfield (5 papers)
  3. Miles Brundage (22 papers)
  4. Gretchen Krueger (11 papers)
  5. Jasmine Wang (6 papers)
  6. Adrian Weller (150 papers)
  7. Markus Anderljung (29 papers)
  8. Igor Krawczuk (9 papers)
  9. David Krueger (75 papers)
  10. Jonathan Lebensold (9 papers)
  11. Tegan Maharaj (22 papers)
  12. Noa Zilberman (19 papers)
Citations (37)