
Quantitative AI Risk Assessments: Opportunities and Challenges (2209.06317v3)

Published 13 Sep 2022 in cs.AI

Abstract: Although AI systems are increasingly being leveraged to provide value to organizations, individuals, and society, significant attendant risks have been identified and have manifested. These risks have led to proposed regulations, litigation, and general societal concerns. As with any promising technology, organizations want to benefit from the positive capabilities of AI technology while reducing the risks. The best way to reduce risks is to implement comprehensive AI lifecycle governance where policies and procedures are described and enforced during the design, development, deployment, and monitoring of an AI system. Although support for comprehensive governance is beginning to emerge, organizations often need to identify the risks of deploying an already-built model without knowledge of how it was constructed or access to its original developers. Such an assessment will quantitatively assess the risks of an existing model in a manner analogous to how a home inspector might assess the risks of an already-built home or a physician might assess overall patient health based on a battery of tests. Several AI risks can be quantified using metrics from the technical community. However, there are numerous issues in deciding how these metrics can be leveraged to create a quantitative AI risk assessment. This paper explores these issues, focusing on the opportunities, challenges, and potential impacts of such an approach, and discussing how it might influence AI regulations.
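The abstract notes that several AI risks can already be quantified using metrics from the technical community. As a minimal illustrative sketch (not taken from the paper), the snippet below computes one such metric, the disparate impact ratio, from a deployed model's predictions in the post-hoc, "home inspector" style the authors describe; the data, group labels, and the 0.8 rule-of-thumb threshold are hypothetical assumptions for demonstration only.

```python
# Illustrative sketch (not from the paper): quantifying one AI risk metric
# for an already-built model, here the disparate impact ratio, a common
# group-fairness measure. The data and the 0.8 threshold are hypothetical.

def disparate_impact(predictions, groups, privileged, unprivileged):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    def favorable_rate(group):
        preds = [p for p, g in zip(predictions, groups) if g == group]
        return sum(preds) / len(preds) if preds else 0.0

    priv_rate = favorable_rate(privileged)
    unpriv_rate = favorable_rate(unprivileged)
    return unpriv_rate / priv_rate if priv_rate else float("inf")

# Hypothetical post-hoc assessment of a deployed model's outputs (1 = favorable outcome).
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

di = disparate_impact(preds, groups, privileged="a", unprivileged="b")
print(f"Disparate impact ratio: {di:.2f}")  # values below ~0.8 are often flagged as a fairness risk
```

A quantitative risk assessment of the kind the paper discusses would aggregate many such metrics (fairness, robustness, privacy, and so on) rather than rely on any single number; how to combine and interpret them is precisely one of the challenges the paper explores.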

Authors (3)
  1. David Piorkowski (17 papers)
  2. Michael Hind (25 papers)
  3. John Richards (16 papers)
Citations (11)