2025 Singapore Conference on AI (SCAI)

Updated 1 July 2025
  • The 2025 Singapore Conference on AI (SCAI) is a global forum addressing the trajectory, safety, policy, and impact of advanced AI, bringing together researchers, policymakers, and civil society.
  • A major outcome of SCAI 2025 was the adoption of the Singapore Consensus, establishing a defense-in-depth model for AI safety research across assessment, development, and control.
  • SCAI 2025 emphasized the importance of standardized evaluation through initiatives like the AI Idea Bench and the AGILE Index for assessing national AI governance maturity.

The 2025 Singapore Conference on AI (SCAI) is a significant international scientific forum focused on the trajectory, safety, policy, and real-world impact of advanced artificial intelligence. It brings together AI researchers, practitioners, governmental representatives, and civil society actors to address the most pressing technical, ethical, and governance challenges arising from rapid progress in general-purpose and transformative AI systems. The 2025 meeting is particularly distinguished by its synthesis of global perspectives, its emphasis on AI safety research priorities, and its role in catalyzing benchmarking and evaluation initiatives at the interface of science, policy, and society.

1. Historical Context and Purpose

SCAI 2025 builds on a growing legacy of AI conference activity in Singapore, reflecting the city-state’s emergence as a major hub for AI innovation and thought leadership. The 2025 meeting formalized its international orientation by convening a special “International Scientific Exchange on AI Safety,” directly supporting the production of the Singapore Consensus on Global AI Safety Research Priorities. The conference also responded to accelerating transformative trends in AI—such as the rise of LLMs, advances in multi-agent systems, and mounting societal reliance on AI in both the public and private sectors.

SCAI's agenda is closely informed by recent expert surveys forecasting a rapid expansion of automatable human work: median estimates suggest that 40% of human economic tasks may be automatable by 2028 and 60% by 2033 (1901.08579), and respondents assign at least a 50% probability to AI autonomously building complex technical systems and generating human-expert-level creative output by 2028 (2401.02843). The same surveys record a rising probability assigned by experts to both highly beneficial and catastrophic outcomes.

2. The Singapore Consensus on AI Safety Research

A centerpiece of SCAI 2025 is the adoption of the Singapore Consensus on Global AI Safety Research Priorities (2506.20702). This document, refined in collaboration with the International AI Safety Report (IAISR) chaired by Yoshua Bengio and backed by 33 governments, articulates a defense-in-depth model that organizes AI safety research into three domains: Assessment, Development, and Control.

  • Assessment: Encompasses techniques for risk identification, system audit, forecasting downstream social impacts, model red-teaming, and rigorous metrology for risk quantification, following the formal structure:

\mathcal{S}_{AI} = f\Big(\underbrace{A_{risk}}_{\text{Assessment}},\, \underbrace{D_{trust}}_{\text{Development}},\, \underbrace{C_{control}}_{\text{Control}}\Big)

  • Development: Focuses on reliable specification, robust design, formal verification, and validation of AI behaviors, integrating advances in interpretability and the synthesis of safe programs and world models. Key technical themes include adversarial robustness, safe pretraining, and the avoidance or limitation of hazardous capabilities by reducing agency, generality, or intelligence where feasible.
  • Control: Involves real-time system monitoring (hardware, software, and user interfaces), reset or override mechanisms (“off-switches”), scalable and layered oversight architectures, incident response protocols, and the embedding of socio-technical mechanisms for accountability and resilience against emergent or unforeseen failure modes.

The Consensus emphasizes that these layers must operate in concert, as no single layer can provide complete assurance in isolation.
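
To make the layered structure concrete, the following minimal sketch treats each layer as a gating check whose joint success is required before deployment. The report fields, helper functions, and thresholds are illustrative assumptions, not definitions from the Consensus document.

```python
# Minimal sketch of the defense-in-depth structure S_AI = f(A, D, C).
# All field names, helper functions, and thresholds are hypothetical
# illustrations of the layered idea, not artifacts of the Consensus itself.
from dataclasses import dataclass

@dataclass
class LayerResult:
    layer: str
    passed: bool
    detail: str

def assessment_layer(report: dict) -> LayerResult:
    # Assessment: risk identification, audits, red-teaming, risk metrology.
    risk = report.get("estimated_risk", 1.0)
    return LayerResult("Assessment", risk <= 0.2, f"estimated_risk={risk}")

def development_layer(report: dict) -> LayerResult:
    # Development: specification, robustness, verification, and validation.
    verified = report.get("spec_verified", False)
    return LayerResult("Development", verified, f"spec_verified={verified}")

def control_layer(report: dict) -> LayerResult:
    # Control: monitoring, override mechanisms, incident response.
    override = report.get("override_tested", False)
    return LayerResult("Control", override, f"override_tested={override}")

def defense_in_depth(report: dict) -> bool:
    results = [assessment_layer(report), development_layer(report), control_layer(report)]
    for r in results:
        print(f"{r.layer:<12} passed={r.passed} ({r.detail})")
    # The layers act in concert: failure of any one blocks deployment.
    return all(r.passed for r in results)

if __name__ == "__main__":
    report = {"estimated_risk": 0.1, "spec_verified": True, "override_tested": True}
    print("deployable:", defense_in_depth(report))
```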

3. Benchmarking, Evaluation, and Data Initiatives

SCAI 2025 is notable for highlighting the growing importance of standardized evaluation and benchmarking tools across the AI research community. Key examples discussed and showcased include:

  • AI Idea Bench 2025 (2504.14191): A benchmark for assessing LLM-generated research ideas, with a dataset of 3,495 post-training-cutoff papers, enabling reference-aligned evaluation and quantification of creativity, feasibility, and novelty.
  • SCAI-QReCC Shared Task (2201.11094): Advanced methodologies for evaluating conversational question answering, with an extended dataset supporting multiple correct answer references and human-centered plausibility/faithfulness judgments.
  • The AGILE Index (2502.15859): An international index for assessing national AI governance maturity and effectiveness, with 39 indicators mapped onto four foundational pillars and cross-country comparisons for policy benchmarking.

These resources embody SCAI’s commitment to rigorous, empirical evaluation across technical and governance domains.
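
As a simplified illustration of how an indicator-based index of this kind can be aggregated, the sketch below groups normalized indicator scores into pillars and averages them into a composite score. The pillar labels, indicator names, and equal weighting are assumptions for illustration only, not the published AGILE Index methodology.

```python
# Hypothetical sketch of pillar-style aggregation for an indicator-based
# governance index. The pillar labels, indicator names, normalization, and
# equal-weight averaging below are placeholders for illustration only.
from statistics import mean

country = {                              # toy indicator scores, each in [0, 1]
    "Pillar A": {"indicator_1": 0.7, "indicator_2": 0.6},
    "Pillar B": {"indicator_3": 0.5},
    "Pillar C": {"indicator_4": 0.9, "indicator_5": 0.8},
    "Pillar D": {"indicator_6": 0.4, "indicator_7": 0.6},
}

def pillar_scores(scores: dict) -> dict:
    # Each pillar score is the mean of its normalized indicators.
    return {pillar: mean(values.values()) for pillar, values in scores.items()}

def composite_index(scores: dict) -> float:
    # Equal pillar weighting, purely for illustration.
    return mean(pillar_scores(scores).values())

for pillar, score in pillar_scores(country).items():
    print(f"{pillar}: {score:.2f}")
print(f"composite index: {composite_index(country):.3f}")
```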

4. Policy, Governance, and Global AI Regulation

SCAI fosters dialogue on a wide spectrum of AI governance philosophies and regulatory systems, situating Singapore’s model as an adaptive, self-regulatory, and internationally aligned approach (2504.19264). This framework emphasizes:

  • Advisory, industry-focused guidance (Model AI Governance Framework), facilitating voluntary best practice adoption.
  • Strong alignment with international norms (OECD, EU principles), promoting global interoperability.
  • A pragmatic balance between protecting human rights and supporting rapid technological innovation—contrasting Europe’s risk-based, mandatory compliance (EU AI Act), the U.S.’s innovation-driven but less centralized environment, and China’s state-centric, security-focused regime.

SCAI also draws on the AGILE Index for cross-national governance benchmarking, revealing Singapore’s balanced scores and leadership in inclusive, transparent, and effective AI governance systems.

5. Societal Impacts, Sustainability, and Equity

The conference agenda directly addresses the anticipated social, economic, and environmental consequences of advanced AI deployment:

  • Surveys analyzed at SCAI suggest that automation of a majority of paid human work may occur far faster than most governmental or industrial plans acknowledge (1901.08579, 2401.02843), necessitating urgent scenario planning for labor markets, education, welfare, and broader social change.
  • Sustainability frameworks, especially the SCAIS Framework (2306.13686), provide 19 criteria and 67 indicators for measurable assessment across ecological, social, economic, and governance axes. SCAI encourages the use of such frameworks to guide responsible AI adoption and to support regulatory or voluntary compliance.
  • Equity-focused technical studies, such as the analysis of fairness in desk-rejection policies (2502.00690), identify mathematical and operational vulnerabilities in conventional academic evaluation systems and recommend new optimization-based mechanisms for a fairer research culture; a toy illustration of such a constrained selection rule follows this list.
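
As a toy illustration of a fairness-aware selection rule of this kind, the sketch below caps the share of desk rejections that can fall on any single author group. It is a simple greedy heuristic invented here, not the mechanism proposed in 2502.00690.

```python
# Toy illustration of a fairness-aware desk-rejection rule: reject the k
# lowest-scoring submissions, but cap the share of rejections that can fall
# on any single author group. This greedy heuristic is invented for
# illustration and is not the mechanism analyzed in 2502.00690.
from collections import defaultdict

def fair_desk_reject(submissions: list[dict], k: int, max_group_share: float = 0.5) -> list[str]:
    group_sizes = defaultdict(int)
    for s in submissions:
        group_sizes[s["group"]] += 1
    rejected, per_group = [], defaultdict(int)
    for s in sorted(submissions, key=lambda s: s["score"]):
        if len(rejected) == k:
            break
        share = (per_group[s["group"]] + 1) / group_sizes[s["group"]]
        if share <= max_group_share:           # enforce the per-group cap
            rejected.append(s["id"])
            per_group[s["group"]] += 1
    return rejected

submissions = [
    {"id": "p1", "score": 2.1, "group": "early_career"},
    {"id": "p2", "score": 2.4, "group": "early_career"},
    {"id": "p3", "score": 2.7, "group": "senior"},
    {"id": "p4", "score": 4.0, "group": "senior"},
]
print(fair_desk_reject(submissions, k=2))  # ['p1', 'p3'], not two early-career papers
```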

SCAI also foregrounds the intersection of AI governance and human rights, regional policy adaptation (notably in Southeast Asian health security (2411.14435)), and sustainability as necessary research and policy priorities.

6. Technical Advances in Multi-Agent and Human-Centered AI

Technical discourse at SCAI 2025 emphasizes advances beyond single-agent autonomous intelligence. Highlights include:

  • Application of advanced game-theoretic tools for multi-agent AI that model real-world complexities, such as dynamic coalition formation, language-based payoffs, sabotage risk, and Bayesian adversarial detection, with sophisticated mathematical formalisms (2506.17348). These developments are crucial for aligning distributed AI systems in partially adversarial or uncertain environments; a minimal sketch of the Bayesian detection idea appears after this list.
  • Embodied AI for real-world social interaction, exemplified by studies on android interviewers capable of attentive, adaptive, and inclusive dialogue at international conferences (2412.09867). Detailed user studies validate both the technical feasibility and human engagement potential for such systems, highlighting new opportunities and challenges in event automation, inclusivity, and trustworthy human-AI interaction.
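
As a minimal illustration of one of these tools, the sketch below performs a Bayesian update of an agent's belief that a counterpart is adversarial, based on observed actions. The likelihood values and decision threshold are invented for illustration and do not come from 2506.17348.

```python
# Minimal sketch of Bayesian adversarial detection in a multi-agent setting:
# an agent updates its belief that a counterpart is adversarial from observed
# actions. The likelihood values and decision threshold are invented for
# illustration and are not drawn from the cited work.

def update_posterior(prior_adv: float, action: str,
                     likelihood_adv: dict, likelihood_coop: dict) -> float:
    """One Bayes update of P(adversary | observed action)."""
    p_adv = likelihood_adv[action] * prior_adv
    p_coop = likelihood_coop[action] * (1.0 - prior_adv)
    return p_adv / (p_adv + p_coop)

# Hypothetical action likelihoods under adversarial vs. cooperative behavior.
likelihood_adv = {"sabotage": 0.60, "cooperate": 0.40}
likelihood_coop = {"sabotage": 0.05, "cooperate": 0.95}

belief = 0.1  # prior probability that the counterpart is adversarial
for observed in ["cooperate", "sabotage", "sabotage"]:
    belief = update_posterior(belief, observed, likelihood_adv, likelihood_coop)
    print(f"after observing {observed!r}: P(adversary) = {belief:.3f}")

if belief > 0.8:
    print("flag counterpart for additional oversight")
```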

7. Anticipated Impact and Future Directions

SCAI 2025 consolidates Singapore’s position as a locus for global AI research, governance dialogue, and techno-policy innovation. The conference’s outcomes are expected to:

  • Influence international and regional AI safety, policy, and evaluation initiatives, with the Singapore Consensus serving as a blueprint for future standard-setting and collaborative research.
  • Accelerate practical adoption of robust, standardized frameworks (e.g., AGILE Index, SCAIS, AI Idea Bench) for technical and policy assessment.
  • Promote ongoing global scientific exchange and scenario planning, prioritizing both the opportunities and risks anticipated by the rapid advance of transformative AI systems.

SCAI’s proceedings, consensus documents, and benchmarking resources establish an authoritative reference for researchers, policymakers, and practitioners navigating the future of AI.