A Moonshot for AI Oracles in the Sciences
- The paper proposes a framework in which AI must perform formal conjecture and proof, manipulate ontological concepts, and integrate mathematics with ontology in order to generate revolutionary theories.
- It distinguishes three epistemic perspectives on AI's role in science: accepting black-box predictions, employing explainability strategies, and pursuing intelligible mathematical theories.
- Its notion of Galilean intelligibility offers a criterion for the transparency of AI-generated theories, based on the ratio of empirical constants to ontologically grounded variables.
The Perspective "A Moonshot for AI Oracles in the Sciences" by Bryan Kaiser et al. examines the evolving role of AI, and of AI oracles in particular, in scientific research. The paper takes as its starting point historical skepticism, notably the views of Philip Anderson and Elihu Abrahams, about whether machines can generate revolutionary scientific theories. In response, the authors propose a framework of necessary conditions an AI must satisfy to produce such revolutionary output, a challenge they term a "moonshot".
AI Oracles and the Oracular Crisis
The paper opens by acknowledging the significant advances in AI, particularly deep learning, that have begun to transform scientific practice. Its primary focus is AI oracles: black-box algorithms that make highly accurate predictions without providing intelligible explanations. The authors argue that the rise of these oracles has precipitated a new form of Kuhnian crisis, which they term the "oracular crisis". Unlike traditional scientific crises, which arise from conflicts between empirical data and prevailing theory, this crisis stems from scientists' inability to interpret the internal logic of algorithms that make superhuman predictions.
Epistemic Perspectives on AI Oracles
Kaiser et al. identify three emerging epistemic perspectives in response to this oracular crisis:
- The Oracular Epistemic Perspective: This perspective accepts AI oracles as black boxes and leverages their predictive capabilities without attempting to understand the underlying mechanisms. Such a post-anthropocentric view prioritizes predictive accuracy over intelligibility.
- The XAI/IAI Epistemic Perspective: This approach aims to reduce the epistemic opacity of AI oracles through Interpretable AI (IAI) and Explainable AI (XAI) methodologies, including ante hoc strategies that design inherently interpretable models and post hoc strategies that analyze and explain trained black-box models (a minimal post hoc sketch follows this list).
- The Galilean Epistemic Perspective: The newest and most ambitious of the three, this perspective aims for AI to produce mathematical theories with high epistemic transparency, generating theories that are intelligible to human scientists and thereby addressing both prediction and understanding.
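To make the XAI/IAI perspective concrete, the sketch below (my own illustration, not an example from the paper) trains an opaque model and then probes it post hoc with scikit-learn's permutation importance, estimating which inputs drive its predictions by shuffling each one in turn and measuring the resulting loss in accuracy. The synthetic data and model choice are assumptions for demonstration.

```python
# A minimal post hoc probe of a black-box model, assuming scikit-learn and
# NumPy are installed. The data below are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                       # three candidate inputs
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=500)  # input 2 is irrelevant

# The "oracle": accurate, but its internal logic is opaque.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Post hoc explanation: shuffle each input and measure how much the model's
# score degrades; inputs the model actually relies on degrade it most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"input {i}: importance {imp:.3f}")
```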
Necessary Conditions for Machine Theorization
The authors propose three necessary conditions for AI to generate revolutionary mathematical theories:
- Formal Conjecture and Proof: The AI must be capable of conjecturing, deriving, and proving mathematical statements (the Lean sketch after this list shows what machine-checked proof looks like).
- Ontological Manipulation: The AI must be able to represent, combine, and alter ontological concepts drawn from the scientific and manifest images (in Sellars's sense).
- Integration of Mathematics and Ontology: The AI must manipulate its mathematics so that it aligns with ontological concepts, combining the two into empirical statements.
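As a concrete reading of the first condition, the following Lean 4 sketch (my own toy examples, not drawn from the paper) shows what formal conjecture and proof amount to in practice: statements are written in a formal language and verified by the proof kernel rather than by human inspection. It assumes a recent Lean 4 toolchain with the built-in `decide` and `omega` tactics.

```lean
-- Toy illustrations of machine-checked conjecture and proof in Lean 4.

-- A concrete conjecture, verified by kernel computation:
example : (List.range 11).foldl (· + ·) 0 = 55 := by decide

-- A universally quantified conjecture, discharged by a decision procedure
-- for linear arithmetic over the naturals:
theorem sum_bound (m n : Nat) : m + n ≤ 2 * (m + n) + 1 := by omega
```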
These conditions distinguish symbolic regression, in which the ontological concepts are specified in advance, from the creation of revolutionary theories, which often requires ontological extrapolation; the toy sketch below illustrates the former.
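The sketch below is a deliberately minimal stand-in for a symbolic-regression engine (my own construction, not the paper's method). Its point is the epistemic one made above: the ontological concepts, here the variables m1, m2, and r, are specified before the search begins, so the algorithm can only rediscover forms over concepts it was handed, never extrapolate to new ones.

```python
# A toy stand-in for symbolic regression, assuming NumPy. A real engine would
# search a far larger expression space, but the epistemic point is the same:
# the variables themselves are pre-specified, only the form is searched.
import numpy as np

rng = np.random.default_rng(1)
m1, m2, r = rng.uniform(1.0, 10.0, size=(3, 200))
F = m1 * m2 / r**2                     # hidden "law" to be rediscovered

# Candidate forms, all built from the pre-specified concepts.
candidates = {
    "m1 + m2 - r":      m1 + m2 - r,
    "m1 * m2 / r":      m1 * m2 / r,
    "m1 * m2 / r**2":   m1 * m2 / r**2,
    "(m1 + m2) / r**2": (m1 + m2) / r**2,
}

# Select the form with the smallest mean squared error against the data.
best = min(candidates, key=lambda k: np.mean((candidates[k] - F) ** 2))
print("best-fitting form:", best)      # -> m1 * m2 / r**2
```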
Galilean Intelligibility
The paper introduces the concept of Galilean intelligibility, a criterion for evaluating the epistemic transparency of AI-produced theories. A theory is Galilean intelligible if it contains at least one empirical statement, and its degree of intelligibility is given by the ratio of the number of empirical constants to the number of ontologically grounded variables (a toy calculation follows below). The authors argue that higher intelligibility, meaning fewer empirical constants per variable, is desirable because it affords greater epistemic transparency.
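A toy calculation makes the criterion concrete. The function below reflects my own reading of the ratio as described above; the paper's exact bookkeeping may differ, and the constant and variable counts for the two example expressions are my own.

```python
# A hypothetical rendering of the intelligibility ratio described above.

def intelligibility_ratio(n_empirical_constants: int,
                          n_ontological_variables: int) -> float:
    """Empirical constants per ontologically grounded variable.

    Lower values correspond to higher Galilean intelligibility: fewer
    fitted constants for each variable that carries physical meaning.
    """
    return n_empirical_constants / n_ontological_variables

# Newton's gravitation, F = G * m1 * m2 / r**2: one empirical constant (G),
# four ontological variables (F, m1, m2, r).
print(intelligibility_ratio(1, 4))     # 0.25

# A generic cubic fit, y = a*x**3 + b*x**2 + c*x + d: four fitted constants,
# two variables (x, y).
print(intelligibility_ratio(4, 2))     # 2.0, far less intelligible
```

On this accounting, Newton's law scores far better than the opaque polynomial fit, matching the intuition that a law with one constant and several meaningful variables explains rather than merely fits.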
Implications and Future Developments
The proposed moonshot has significant implications for both theoretical and practical advances in AI and for scientific research. If the necessary conditions for machine theorization can be met, AI could contribute transformative insights across scientific disciplines. The authors speculate that future AI theorists may eventually engage in Kuhnian normal science, contributing to theoretical advancement and empirical testing much as human scientists do in post-revolutionary periods.
The perspective set forth by Kaiser et al. underscores the need to balance predictive power against intelligibility in AI-generated scientific theories, and it highlights both the current limitations and the future potential of AI oracles for deepening our understanding of complex phenomena. By confronting the epistemic opacity of AI oracles and aiming for a future in which AI helps formulate intelligible theories, the paper offers a thoughtful roadmap for the intersection of AI and scientific discovery.