
US-China perspectives on extreme AI risks and global governance (2407.16903v1)

Published 23 Jun 2024 in cs.CY and cs.AI

Abstract: The United States and China will play an important role in navigating safety and security challenges relating to advanced artificial intelligence. We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence, extreme risks from AI, and the potential for international cooperation. Specifically, we compiled publicly-available statements from major technical and policy leaders in both the United States and China. We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI), that may have the most significant impacts on national and global security. Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control. Both countries have also launched early efforts designed to promote international cooperation around safety standards and risk management practices. Notably, our findings only reflect information from publicly available sources. Nonetheless, our findings can inform policymakers and researchers about the state of AI discourse in the US and China. We hope such work can contribute to policy discussions around advanced AI, its global security threats, and potential international dialogues or agreements to mitigate such threats.

US-China Perspectives on Extreme AI Risks and Global Governance

The paper "US-China perspectives on extreme AI risks and global governance" by Akash Wasil and Tim Durgin provides a critical analysis of the safety and security challenges posed by advanced AI from the perspectives of these two leading nations. The analysis is grounded in publicly available statements from major technical and policy leaders in the United States and China, with a focus on advanced forms of AI, such as artificial general intelligence (AGI), that may have the most significant implications for national and global security.

Safety and Security Concerns

China's Perspective

In China, there is a pronounced acknowledgment of long-term security risks associated with advanced AI. The discourse highlights potential dangers such as intelligence explosions, self-replication, deception, and AGI-related security risks. The government has also expressed concern about AI generating "unhealthy" or "illegal" information, and influential Chinese scholars have raised alarms about potential existential risks posed by advanced AI systems. The emphasis on aligning AI-generated content with socialist values is a distinctive feature of the Chinese approach.

United States' Perspective

Discourse in the United States likewise addresses long-term security risks, focusing on potential catastrophic outcomes and loss-of-control scenarios associated with AGI. There is strong concern over both malicious use of advanced AI and unintended loss of control. US policymakers are increasingly interested in understanding AGI risks, indicating a proactive approach to developing regulatory frameworks, and experts emphasize the need for a standardized definition of AGI and a framework for addressing its risks.

International Cooperation and Global Governance

China's Proposals

China advocates for international cooperation on AI risk management, emphasizing tiered testing systems whose requirements scale with an AI system's risk level. There is also a strong focus on ensuring fair and equal access to AI technologies, reflecting China's stated commitment to global inclusivity. Chinese proposals further support establishing international AI governance institutions under United Nations frameworks.

United States' Initiatives

The United States actively pursues international collaboration on AI standards and risk management practices. Legislative initiatives such as the Future of AI Innovation Act highlight alliances with like-minded nations. The US AI Safety Institute emphasizes global cooperation and aims to develop common AI safety methodologies, signaling a commitment to international governance.

Joint International Perspectives

Joint statements from US and Chinese representatives underscore shared recognition of AI's safety and security threats. Significant documents, like the Bletchley Declaration, stress the importance of international scientific and governmental coordination to mitigate extreme AI risks. There is a consensus on the necessity for clear regulations and operational guidelines to manage AI hazards on a global scale.

Implications and Future Directions

The paper provides an insightful comparison of US and Chinese perspectives on AI, reflecting both common concerns and unique national priorities. The shared recognition of extreme AI risks by both countries suggests a foundation for potential cooperation in developing global standards and governance structures. Future developments may see increased collaboration in formulating comprehensive international AI policies, creating a balanced approach that addresses technological advancements while mitigating associated risks.

This work illuminates the current state of AI discourse in two major global players, offering insights that could inform policymakers and contribute to a more coordinated response to the challenges posed by advances in AI.

Given the rapid evolution of AI technology, continued analysis and dialogue are essential to ensure AI developments remain beneficial and secure for all stakeholders involved.

Authors (2)
  1. Akash Wasil (5 papers)
  2. Tim Durgin (1 paper)