- The paper establishes compute as a uniquely governable asset due to its properties of detectability, excludability, quantifiability, and supply chain concentration.
- It demonstrates how targeted compute governance can improve visibility via mandatory reporting, steer resource allocation, and enforce regulations through hardware-based controls.
- The paper highlights potential risks such as privacy issues and economic impacts, urging careful policy design to mitigate centralization and other challenges.
Computing Power and the Governance of Artificial Intelligence
The paper "Computing Power and the Governance of Artificial Intelligence" provides a comprehensive discussion on the potential of leveraging computing power as a strategic tool in AI governance. It highlights the importance of compute as a pivotal input in AI development, especially for frontier models, and explores how policymakers might effectively regulate this resource. The authors argue that compute governance can significantly enhance regulatory capacities in three primary areas: visibility, allocation, and enforcement.
Key Themes and Arguments
Importance and Feasibility of Compute Governance
The paper argues that compute is uniquely governable due to four key properties: detectability, excludability, quantifiability, and supply chain concentration. Unlike other AI inputs such as data and algorithms, which are intangible and less controllable, compute is a physical, quantifiable asset produced via a concentrated supply chain. These properties make computing hardware an attractive target for governance.
The exponential growth in compute usage for AI model training underscores its centrality to the development of cutting-edge AI systems. The paper emphasizes that compute investments have reliably resulted in capability improvements, supported by the predictive power of scaling laws. This trend points to the potential of compute as a leverage point for influencing the development trajectory of AI technologies.
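To give a concrete sense of the quantities involved, a rule of thumb commonly used in the scaling-law literature estimates the training compute of a dense transformer as roughly 6 × parameters × training tokens. The short Python sketch below applies this estimate to a few illustrative model sizes; the sizes and token counts are hypothetical examples, not figures from the paper.

```python
# Rough training-compute rule of thumb from the scaling-law literature:
# C ≈ 6 * N * D, where N = parameter count and D = training tokens.
# The model sizes below are illustrative only, not figures from the paper.

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training FLOP for a dense transformer."""
    return 6 * params * tokens

examples = {
    "1B params, 20B tokens": (1e9, 2e10),
    "70B params, 1.4T tokens": (7e10, 1.4e12),
    "500B params, 10T tokens": (5e11, 1e13),
}

for name, (n, d) in examples.items():
    print(f"{name}: ~{training_flop(n, d):.1e} FLOP")
```

Even under this crude estimate, frontier-scale training runs sit many orders of magnitude above everyday computing workloads, which is part of why large-scale AI compute is comparatively detectable.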
Enhancing Governance Capacities
Visibility
The authors discuss several mechanisms to enhance visibility into AI development through compute governance. These include mandatory reporting of compute usage, development of an international AI chip registry, and privacy-preserving workload monitoring. Such measures could help policymakers identify and assess the capabilities of actors in AI development, enabling informed decision-making and international coordination.
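As a toy illustration of how a compute-usage reporting requirement might be operationalized, the sketch below flags training runs whose estimated compute crosses a threshold. The threshold value, data structures, and lab names are assumptions made for illustration; they are not drawn from the paper.

```python
from dataclasses import dataclass

# Hypothetical reporting threshold; the exact value is an assumption for
# illustration, not a figure taken from the paper.
REPORTING_THRESHOLD_FLOP = 1e26

@dataclass
class TrainingRun:
    developer: str
    estimated_flop: float  # e.g., from the 6 * N * D estimate above

def runs_requiring_report(runs: list[TrainingRun]) -> list[TrainingRun]:
    """Return the runs whose estimated compute meets or exceeds the threshold."""
    return [r for r in runs if r.estimated_flop >= REPORTING_THRESHOLD_FLOP]

runs = [
    TrainingRun("LabA", 3.1e25),   # below the threshold: no report required
    TrainingRun("LabB", 2.4e26),   # above the threshold: report required
]
for r in runs_requiring_report(runs):
    print(f"{r.developer}: compute-usage report required ({r.estimated_flop:.1e} FLOP)")
```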
Allocation
The paper suggests that governance could actively steer AI development toward beneficial outcomes by redistributing compute resources. This could involve subsidizing access to compute for socially beneficial AI research or reallocating resources to reduce global disparities in compute availability. This approach aligns with the concept of differential technological development, which prioritizes risk-reducing advances ahead of risk-increasing ones.
Enforcement
Compute governance can also directly support the enforcement of AI regulations through hardware-enabled mechanisms. Proposals include enforcing "compute caps" via physical limits on chip networking capabilities and employing hardware-based remote enforcement. Such approaches could block violations of regulatory norms at the hardware level, deterring risky or malicious development activities.
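To give a flavor of how a hardware-enforced compute cap might work in principle, the minimal sketch below models a device that meters cumulative FLOP and refuses further work once its authorized allowance is exhausted. The class names, allowance mechanism, and numbers are purely hypothetical; they do not correspond to any real hardware or firmware interface, nor to a specific design in the paper.

```python
# Purely hypothetical sketch of a metered "compute cap": the device tracks
# cumulative work and halts once its authorized allowance is exhausted.

class ComputeCapExceeded(Exception):
    pass

class MeteredAccelerator:
    def __init__(self, allowance_flop: float):
        # Allowance granted by a (hypothetical) regulator-issued authorization.
        self.allowance_flop = allowance_flop
        self.used_flop = 0.0

    def run_workload(self, workload_flop: float) -> None:
        """Execute a workload only if it fits within the remaining allowance."""
        if self.used_flop + workload_flop > self.allowance_flop:
            raise ComputeCapExceeded("Allowance exhausted; renewed authorization required.")
        self.used_flop += workload_flop

device = MeteredAccelerator(allowance_flop=1e24)
device.run_workload(4e23)       # accepted: within the allowance
try:
    device.run_workload(8e23)   # rejected: would exceed the cap
except ComputeCapExceeded as err:
    print(err)
```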
Risks and Limitations
While highlighting the benefits of compute governance, the paper also emphasizes its inherent risks and limitations. Potential harms include privacy violations, economic repercussions, and excessive centralization of control. Advances in algorithmic and hardware efficiency may also diminish the efficacy of compute-focused interventions over time. The authors propose several guardrails to mitigate these risks, such as focusing interventions on large-scale AI compute and periodically reviewing governance measures.
Implications and Future Considerations
The analysis of compute governance in this paper illustrates its potential as a pivotal strategy in AI regulation. By leveraging compute appropriately, policymakers can better manage the risks and rewards associated with AI development. While compute governance alone cannot address every challenge in AI regulation, it offers a foundation for interventions aimed at promoting safety and equitable access to AI technologies.
Future research must address the evolving dynamics of compute production and usage, as well as the impact of technological advancements on governance strategies. Successful compute governance will likely require a combination of technological innovation, international cooperation, and adaptive policymaking. The continued exploration of these themes will be crucial as AI technologies grow ever more integral to societal progress.