Regulating ChatGPT and other Large Generative AI Models (2302.02337v8)

Published 5 Feb 2023 in cs.CY and cs.AI

Abstract: Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. The paper argues for three layers of obligations concerning LGAIMs (minimum standards for all LGAIMs; high-risk obligations for high-risk use cases; collaborations along the AI value chain). In general, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep track with the dynamics of ChatGPT et al.

Regulating ChatGPT and Other Large Generative AI Models: An Overview

The paper "Regulating ChatGPT and Other Large Generative AI Models," authored by Philipp Hacker, Andreas Engel, and Marco Mauer, addresses the pressing issue of regulatory frameworks for large generative AI models (LGAIMs) such as ChatGPT and GPT-4. With these models rapidly gaining prominence in various domains, the paper critically analyzes existing regulatory paradigms, with a focus on the European Union (EU), and lays out specific proposals for adapting regulations to the unique challenges posed by LGAIMs.

Technical and Regulatory Context

The authors begin by distinguishing LGAIMs from previous AI iterations, emphasizing their capacity to generate highly intricate content across text, image, video, and audio domains. Unlike traditional AI models, LGAIMs involve very large numbers of parameters, vast training datasets, and substantial computation, which raises issues of data quality and energy consumption. This technical grounding sets a factual basis for the ensuing discussion of legal and regulatory implications.

Critique of Existing Regulatory Approaches

The paper provides a cogent critique of the EU's current regulatory framework, including the AI Act and the Digital Services Act (DSA). The authors argue that these frameworks, which primarily address conventional AI models, falter when confronted with LGAIMs. Chief among their criticisms is the overly broad definition of "general-purpose AI systems" (GPAIS) in the AI Act, which they argue is too imprecise to distinguish systems demanding intensive risk management from those with limited functionality.

Additionally, the paper underscores a critical oversight within the AI Act: its obligations burden LGAIM developers with responsibilities that ignore the practical impossibility of foreseeing and mitigating every potential high-risk application. This, the authors argue, leads to inefficiencies and risks stifling innovation, particularly among smaller developers who cannot shoulder compliance costs the way tech giants such as Microsoft/OpenAI and Google can.

Proposal for a Contextual Regulation Framework

In response to these regulatory deficiencies, Hacker and colleagues propose an alternative model focused on:

  1. Terminological Clarity: The paper introduces a nuanced categorization within the AI development ecosystem, delineating roles as developers, deployers, professional and non-professional users, and recipients of AI output. This classification aims to distribute regulatory obligations more equitably across the AI value chain, focusing compliance requirements on deployers in specific, high-risk contexts rather than imposing blanket coverage on all LGAIMs.
  2. Three-Tiered Regulatory Obligations: The authors recommend a system of layered obligations:
    • Minimum standards applicable to all LGAIMs, covering issues such as non-discrimination and transparency.
    • High-risk obligations triggered only in specific use-case scenarios, thus avoiding inefficiencies linked to model versatility.
    • Collaborative compliance frameworks within the AI value chain, particularly between developers and deployers.
  3. Content Moderation and Data Protection: The authors advocate extending the DSA's content moderation provisions to LGAIMs, with notice and action mechanisms akin to those governing social media platforms. This includes engaging trusted flaggers to preempt the dissemination of manipulated or harmful content. Moreover, data protection under the GDPR stands as an essential pillar, especially in mitigating risks such as model inversion attacks to which LGAIMs are susceptible.

Implications and Future Directions

The proposals presented in the paper underline a strategic regulatory shift from a model-centric to an application-centric framework. By focusing on specific high-risk applications, regulation aligns the compliance burden more closely with actual societal impact, fostering an environment conducive to innovation and competition. The paper's call for technology-neutral regulations that adapt to future AI advancements presents a pragmatic approach to sustainable AI governance.

This working paper makes a significant contribution to the discourse on LGAIM regulation. The authors adeptly balance the need for regulatory oversight with the flexibility required to foster ongoing technological development, highlighting the urgency for policymakers to refine existing frameworks in light of emerging AI capabilities. As LGAIMs continue to evolve, regulatory agility and clarity will be paramount in ensuring these tools serve societal interests while guarding against their risks.

Authors (3)
  1. Philipp Hacker (14 papers)
  2. Andreas Engel (28 papers)
  3. Marco Mauer (1 paper)
Citations (265)