
Normative Challenges of Risk Regulation of Artificial Intelligence and Automated Decision-Making (2211.06203v1)

Published 11 Nov 2022 in cs.CY

Abstract: Recent proposals aiming at regulating AI and automated decision-making (ADM) suggest a particular form of risk regulation, i.e. a risk-based approach. The most salient example is the Artificial Intelligence Act (AIA) proposed by the European Commission. The article addresses challenges for adequate risk regulation that arise primarily from the specific type of risks involved, i.e. risks to the protection of fundamental rights and fundamental societal values. They result mainly from the normative ambiguity of the fundamental rights and societal values in interpreting, specifying or operationalising them for risk assessments. This is exemplified for (1) human dignity, (2) informational self-determination, data protection and privacy, (3) justice and fairness, and (4) the common good. Normative ambiguities require normative choices, which are distributed among different actors in the proposed AIA. Particularly critical normative choices are those of selecting normative conceptions for specifying risks, aggregating and quantifying risks including the use of metrics, balancing of value conflicts, setting levels of acceptable risks, and standardisation. To avoid a lack of democratic legitimacy and legal uncertainty, scientific and political debates are suggested.

Normative Challenges of Risk Regulation of AI and ADM

The paper addresses the intricate normative challenges associated with regulating AI and automated decision-making (ADM) systems, specifically through a risk-based regulatory approach. The discussion is positioned within the framework of current legislative efforts, such as the European Commission's Artificial Intelligence Act (AIA).

Risk-Based Versus Rights-Based Approaches

The paper highlights the choice between a risk-based and a rights-based regulatory approach. A risk-based approach prioritizes regulatory activities based on perceived risk levels, which can be seen as resource-efficient but might fail to fully protect fundamental rights. In contrast, a rights-based approach enforces regulations consistently across contexts to ensure comprehensive rights protection, which may incur higher administrative costs.

Ambiguities in Fundamental Rights and Societal Values

A critical issue in AI and ADM regulation is the normative ambiguity related to fundamental rights like human dignity, justice, and the common good. These ambiguities demand careful normative choices concerning their interpretation and operationalization for risk assessments.

Examples include:

  • Human Dignity: The paper questions how human dignity can be quantified or protected in automated systems that rely on data-driven generalizations, which may fail to recognize persons as individuals.
  • Informational Self-Determination and Privacy: New assessment criteria adapted to AI capabilities are needed, for example to address re-identification risks arising from ostensibly non-personal data.
  • Justice and Fairness: Fairness metrics may overlook individualized justice, since they typically measure group-level parity rather than the treatment of each individual (see the illustrative sketch after this list).
  • Common Good: This concept, though emphasized in policy debates, lacks concrete principles for practical regulatory frameworks.
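
To make the group-versus-individual point concrete, the following sketch (my illustration, not from the paper) computes a standard group fairness metric, the demographic parity difference. The scenario and numbers are hypothetical: the metric reports perfect parity even though otherwise comparable individuals within each group receive opposite outcomes, which is exactly the kind of individualized injustice the paper warns such metrics can miss.

```python
# Illustration (not from the paper): demographic parity difference, a common
# group fairness metric, can report "fair" even when similar individuals are
# treated inconsistently within each group.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between groups 'A' and 'B'.

    decisions: list of 0/1 outcomes; groups: parallel list of group labels.
    """
    def positive_rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members)

    return abs(positive_rate("A") - positive_rate("B"))

# Hypothetical data: within each group, two otherwise comparable applicants
# receive opposite decisions, yet the group-level rates are identical.
decisions = [1, 0, 1, 0]
groups = ["A", "A", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.0 -> parity at group level
```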

Importance of Operationalizing Risks

Effective risk regulation requires clear criteria to identify and manage risks to societal values and rights. However, ethical guidelines and existing legislation often lack the specificity needed for such operationalization, demanding further normative decisions to fill these gaps.
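
To show where such normative decisions enter even the simplest quantification, here is a minimal sketch, entirely hypothetical and not proposed by the paper or the AIA: a conventional likelihood-times-severity risk score with an acceptability threshold. Every element of it, the severity scale, the aggregation rule, and the threshold, is a normative choice rather than a technical fact.

```python
# Minimal sketch, my own illustration rather than anything the paper or the
# AIA prescribes: a likelihood-times-severity risk score with an
# acceptability threshold. Each parameter embodies a normative choice of the
# kind the paper argues cannot be settled on technical grounds alone when the
# potential harm is to fundamental rights.

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: float  # 0.0-1.0; hard to estimate for rights violations
    severity: int      # ordinal 1-5; mapping, e.g., dignity harms onto it is contested

ACCEPTABLE = 1.5  # the "level of acceptable risk": a political, not technical, number

def score(risk: Risk) -> float:
    # Multiplying an ordinal severity by a probability is itself a debatable
    # aggregation choice; it treats incommensurable harms as commensurable.
    return risk.likelihood * risk.severity

risks = [
    Risk("re-identification from aggregated data", likelihood=0.2, severity=4),
    Risk("biased ranking in an automated hiring shortlist", likelihood=0.6, severity=3),
]
for r in risks:
    verdict = "acceptable" if score(r) <= ACCEPTABLE else "needs mitigation"
    print(f"{r.description}: score {score(r):.1f} -> {verdict}")
```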

Implementation Challenges

The AIA proposes a hybrid governance approach involving self-certification by providers and oversight by standardization bodies. This raises democratic-legitimacy concerns, as these bodies may lack the required transparency and accountability.

Implications for Regulation

Dispersing normative decision-making across various actors without explicit guidelines may lead to inconsistencies and potential arbitrariness in protecting fundamental rights. This calls for a broader discourse on the normative choices underlying AI risk regulation, conducted with public participation and transparency.

Conclusion

The paper suggests that for AI and ADM regulation to be legitimate and effective, challenges related to normative ambiguities and the delegation of regulatory responsibilities need to be addressed. The emphasis is placed on ensuring that these choices reflect democratic processes and are informed by comprehensive societal discourse. This involves refining the balance between protecting fundamental rights and fostering innovation within AI systems. Future regulatory frameworks should thus evolve through ongoing scientific and public engagement, rather than remaining static.

Authors
  1. Carsten Orwat
  2. Jascha Bareis
  3. Anja Folberth
  4. Jutta Jahnel
  5. Christian Wadephul