Sciences of the Artificial
- Sciences of the Artificial is a framework that rigorously examines intentionally designed systems, focusing on their construction, evaluation, and iterative improvement.
- It integrates design theory, evolutionary computation, and AI to systematically engineer artifacts ranging from algorithms to organizational processes.
- The approach leverages universal Darwinism and bio-inspired architectures to optimize creativity, scalability, and the automation of scientific discovery.
The Sciences of the Artificial concern the systematic understanding and principled design of man-made, engineered systems—artifacts whose structure and function arise from intentional construction rather than from natural evolution or emergence. Originating in Herbert A. Simon’s seminal 1969 work, this framework provides a rigorous theoretical and methodological foundation for treating artifacts—ranging from algorithms and engineered biological systems to organizational processes—as scientific objects. In contrast to the natural sciences, which seek to explain phenomena found in nature, the Sciences of the Artificial address how artificial systems are conceived, constructed, studied, and improved, drawing on insights from design theory, evolutionary computation, cognitive science, philosophy of science, and machine intelligence.
1. Foundations: Simon’s Paradigm and the Science of Design
Herbert A. Simon’s "Sciences of the Artificial" set out to place the analysis and synthesis of artifacts (their inner mechanisms, design processes, and conceptual underpinnings) on a footing as rigorous as the study of natural systems. Simon emphasized that artifacts are characterized by the interaction between an inner environment (the organization of the artifact itself, including subsystems and logical structures) and an outer environment (the context in which the artifact must operate) (Nielson et al., 2021). In this paradigm, scientific inquiry encompasses not only how artifacts perform and why they function but also the generation, evaluation, and iterative improvement of novel designs.
Simon’s classic “problem-solving” model formalizes design as a search within a pre-given space of alternatives, subject to constraints and guided by criteria for "goodness." Decision and planning are structured as optimization in a well-defined domain:
$$\max_{x \in A} U(E(x)),$$

where $A$ is the set of alternatives, $x \in A$ are the decision variables, $E$ is a set of evaluations, and $U$ gathers preferences, utility functions, and aggregation operators (Kazakci, 2014). However, this paradigm largely presupposes fixed goals and alternatives.
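As a concrete illustration, the following Python sketch renders this search-over-alternatives view in code. The beam example, the constraint, and the utility function are invented for illustration and are not drawn from Kazakci's formalization.

```python
# Minimal sketch of design-as-search in Simon's paradigm: optimization over a
# fixed, pre-given space of alternatives. All names here are illustrative.
from typing import Callable, Iterable, TypeVar

X = TypeVar("X")

def design_as_search(
    alternatives: Iterable[X],        # A: the pre-given space of alternatives
    satisfies: Callable[[X], bool],   # constraints on admissible designs
    utility: Callable[[X], float],    # U: aggregated preference / "goodness"
) -> X:
    """Return the admissible alternative with the highest utility."""
    feasible = [x for x in alternatives if satisfies(x)]
    if not feasible:
        raise ValueError("no alternative satisfies the constraints")
    return max(feasible, key=utility)

# Hypothetical example: choose a beam thickness under a weight budget.
best = design_as_search(
    alternatives=range(1, 20),
    satisfies=lambda t: t * 2.5 <= 30.0,  # weight constraint
    utility=lambda t: t ** 0.5,           # stiffness proxy, rising with t
)
```

The point of the sketch is the fixed structure: the space of alternatives, the constraints, and the criterion are all given in advance, which is exactly the assumption the next section relaxes.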
2. Creativity, Conceptive Intelligence, and the Brouwer Machine
Extensions and critiques of Simon’s model highlight that genuine artificial intelligence and design involve not just the optimization among known alternatives but also the invention of new categories, goals, and implementation methods. Kazakçı’s “Conceptive Artificial Intelligence” interrogates this gap, defining conceptive intelligence as the capacity to generate new object types and to devise means for their realization—modeling design as the continuous expansion of the universe of possible artifacts (Kazakci, 2014).
The Brouwer machine provides a formal framework for this process with two generative spaces:
- Type-space: a set $P$ of primitive properties and a library $C$ of concepts, each $c \subseteq P$
- Method-space: a set $A$ of primitive actions and a library $M$ of construction methods, each $m \in M$ a sequence drawn from $A$
At each iteration, a conceptive agent alternates between:
- Creation of a new concept $c \notin C$, extending the type-space
- Construction of a new method $m \notin M$ that realizes $c$, extending the method-space
This dual generativity—imaginative construction of "what" and "how"—contrasts with most classic AI paradigms wherein only the "how" varies within a fixed "what." The combinatorial explosion in both spaces necessitates novelty-biased search and meta-heuristics (Kazakci, 2014).
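A minimal sketch of this dual generative loop follows, assuming toy sets of primitive properties and actions; the data structures and generation heuristics are illustrative assumptions, not Kazakçı's formalism.

```python
# Hedged sketch of a Brouwer-machine-style conceptive loop: the agent expands
# both a library of concepts (subsets of primitive properties) and a library
# of methods (sequences of primitive actions). All contents are toy examples.
import random

PRIMITIVE_PROPS = {"rigid", "transparent", "conductive", "flexible"}
PRIMITIVE_ACTS = ["cast", "etch", "dope", "laminate"]

concepts: list[frozenset[str]] = []            # C: library of concepts, each c ⊆ P
methods: dict[frozenset[str], list[str]] = {}  # M: a construction method per concept

def create_concept() -> frozenset[str]:
    """Imagine a new object type: an unseen combination of properties."""
    while True:
        c = frozenset(random.sample(sorted(PRIMITIVE_PROPS), k=2))
        if c not in concepts:
            return c

def construct_method(c: frozenset[str]) -> list[str]:
    """Devise a (toy) realization: a sequence of primitive actions."""
    return random.sample(PRIMITIVE_ACTS, k=len(c))

for _ in range(3):                        # a few conceptive iterations
    c = create_concept()                  # expand the type-space ("what")
    concepts.append(c)
    methods[c] = construct_method(c)      # expand the method-space ("how")
```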
3. Universal Darwinism and Evolutionary Foundations
A recurring explanatory structure within the Sciences of the Artificial is universal Darwinism: the idea that knowledge and functionality advance by iterative cycles of variation, selection, and retention, analogous to evolution by natural selection (Nielson et al., 2021). This framework provides an alternative to strict induction, addressing the limitations of Bayesian and frequentist approaches that presuppose computable priors or exhaustive model classes.
A minimal Darwinian meta-algorithm is formally stated as:
- Define the optimization problem.
- Generate variants $v_1, \dots, v_n$ via mutation, recombination, or systematic transformation.
- Evaluate each variant with a fitness function $f(v_i)$.
- Select the best variants $V^\ast$.
- Iterate, seeding further variation from $V^\ast$.
This abstract scheme subsumes genetic algorithms, evolutionary strategies, neural architecture search, hyperparameter optimization, and even the nested search inherent to gradient descent in deep learning (outer loop: initialization/hyperparameters; inner loop: parameter updates minimizing loss) (Nielson et al., 2021). Crucially, these processes are not strictly inductive: they do not infer global structure from static data but instead search adaptively through trial and error across a landscape of possibilities.
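The meta-algorithm can be made concrete in a few lines. The sketch below assumes an illustrative continuous minimization problem and Gaussian mutation, standing in for any of the instantiations above.

```python
# Minimal sketch of the Darwinian meta-algorithm from the text: variation,
# evaluation, selection, retention. Problem and operators are illustrative.
import random

def fitness(v: list[float]) -> float:
    return -sum(x * x for x in v)        # maximize: peak at the origin

def mutate(v: list[float]) -> list[float]:
    return [x + random.gauss(0.0, 0.1) for x in v]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for generation in range(100):
    # variation: seed new candidates from the retained population
    variants = population + [mutate(random.choice(population)) for _ in range(20)]
    variants.sort(key=fitness, reverse=True)   # evaluation: rank by f(v_i)
    population = variants[:20]                 # selection: retain V*, iterate
```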
4. Bio-Inspired Architectures and Hierarchical Organization
Biological systems suggest three foundational design strategies for scalable intelligence: evolutionary tinkering, contextual/multiscale information processing, and hierarchical modularity (Dehghani, 2017).
- Evolutionary tinkering is modeled as a stochastic iterative process that modifies and recombines existing modules.
- Contextual processing involves bidirectional causality between microstates and macrostates: local components shape aggregate dynamics and are in turn constrained by them.
- Modularity and hierarchy are enforced by trade-offs between scaling and stability, favoring decomposable, layered architectures.
The requirements for AGI distilled from biological precedent include requisite variety, multiscale feedback, local trial-and-error optimization, modular composition, and thermodynamic efficiency (Dehghani, 2017). These criteria inform not only software architectures but also hardware and neuromorphic design, producing artificial agents with resilience and adaptability.
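As a toy illustration of the bidirectional micro/macro coupling described above, the sketch below lets local components update toward a shared aggregate context; the dynamics and constants are assumptions for illustration only, not Dehghani's model.

```python
# Hedged sketch of bidirectional micro/macro coupling: microstates update
# locally, the macrostate is their aggregate, and it feeds back as context.

def step(micro: list[float], gain: float = 0.1) -> list[float]:
    macro = sum(micro) / len(micro)        # upward causation: aggregate of parts
    # downward causation: each module is nudged toward the macro context
    return [x + gain * (macro - x) for x in micro]

state = [0.0, 1.0, 4.0, -2.0]
for _ in range(50):
    state = step(state)   # settles into a macro-consistent configuration
```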
5. Artificiality in Social Sciences and Complex Systems
The sciences of the artificial also provide tools for analysis and experimentation in domains beyond the traditional engineering of machines. In social sciences, artificiality and simulation allow for the construction of artificial societies, modeling social phenomena via agent-based systems. Increased computational power has enabled social scientists to formalize, simulate, and experiment with models previously accessible only in "hard" sciences [0701087]. This approach recasts social science as an experimental discipline, allowing for synthesis of new theoretical frameworks for sociality that mirror the bottom-up compositionality seen in other domains of the artificial.
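A minimal agent-based sketch in this spirit uses bounded-confidence opinion dynamics in the style of Deffuant et al.; the model choice and parameters are illustrative, not drawn from the cited work.

```python
# Toy agent-based sketch of "artificial society" experimentation: agents hold
# opinions in [0, 1] and only interact when their views are close enough.
import random

opinions = [random.random() for _ in range(100)]
EPSILON, MU = 0.2, 0.5     # confidence bound and convergence rate

for _ in range(10_000):
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < EPSILON:   # only similar agents interact
        shift = MU * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
# Emergent, bottom-up outcome: opinions cluster into a few stable camps.
```

Running such a model is an experiment on a synthetic society: macro-level clustering is not programmed in anywhere but emerges from local interaction rules, which is precisely the bottom-up compositionality the paragraph describes.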
6. AI-as-Exploration: Navigating Intelligence Space
“AI-as-exploration” reframes the sciences of the artificial as a programme for systematically discovering the building blocks of intelligence, unconstrained by anthropocentrism or biocentrism. Intelligence space is conceptualized as the space of all possible intelligent systems, each defined by behavioral domain, implementation, computational procedure, representational scheme, and embodiment (Mollo, 2024).
The methodology involves:
- Identifying a target regime in intelligence space.
- Constructing or adapting artificial systems (e.g., Transformers) inhabiting that regime.
- Probing performance on behavioral benchmarks (e.g., novel concept combination tasks).
- Conducting mechanistic analysis to map computational motifs.
- Comparing and contrasting with biological solutions.
Case studies demonstrate that LLMs may display human-level performance in combinatorial concept tasks via fundamentally non-symbolic, high-dimensional vector merging, rather than schema-like inference. This divergence exemplifies how AI can inhabit and map previously unexplored regions of intelligence space, yielding new representational and computational primitives (Mollo, 2024).
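A toy sketch of such non-symbolic combination merges two concept vectors by averaging and retrieves the nearest stored concept. The vocabulary and three-dimensional embeddings below are invented for illustration and bear no relation to actual LLM internals.

```python
# Illustrative sketch of concept combination as high-dimensional vector
# merging rather than schema-like inference. All vectors are toy assumptions.
import math

def combine(u: list[float], v: list[float]) -> list[float]:
    return [(a + b) / 2 for a, b in zip(u, v)]   # simple additive merge

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vocab = {"pet": [0.9, 0.1, 0.0], "fish": [0.1, 0.9, 0.2], "guppy": [0.5, 0.6, 0.1]}
merged = combine(vocab["pet"], vocab["fish"])
nearest = max(vocab, key=lambda w: cosine(vocab[w], merged))   # -> "guppy"
```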
7. Robotic Automation of Scientific Discovery
Robotic scientists (AI-robotic systems capable of fully automated scientific experimentation) embody the sciences of the artificial at the ecosystem level of scientific practice. These systems integrate:
- AI discovery agents for hypothesis generation and model evaluation,
- Experiment designers mapping hypotheses to actionable laboratory tasks,
- Robotic platforms executing experimental protocols,
- Automated data analysis,
- Structured knowledge bases encoding ontology and metadata (Gower et al., 2024).
Their operation is structured as a closed loop unifying deduction, induction, and abduction, coupled to active learning strategies that minimize model uncertainty.
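A hedged sketch of such an uncertainty-driven loop follows, with a stand-in model, a stand-in uncertainty heuristic, and a simulated laboratory; none of these reflect the actual Genesis architecture.

```python
# Sketch of the closed experiment loop: among candidate experiments, run the
# one about which the current model is least certain, then update the record.
import random

candidates = [{"temp": t} for t in range(20, 80, 5)]   # possible experiments

def model_uncertainty(exp: dict, observations: list[dict]) -> float:
    # Toy heuristic: uncertainty grows with distance from observed conditions.
    if not observations:
        return float("inf")
    return min(abs(exp["temp"] - o["temp"]) for o in observations)

def run_in_lab(exp: dict) -> float:
    return 0.5 * exp["temp"] + random.gauss(0, 1)      # stand-in robot/assay

observations: list[dict] = []
for _ in range(5):
    exp = max(candidates, key=lambda e: model_uncertainty(e, observations))
    result = run_in_lab(exp)                           # robotic execution
    observations.append({**exp, "yield": result})      # update knowledge base
```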
Recent systems such as Genesis deploy 1,000-plex microfluidic reactors with logical models and automated hypothesis ranking, massively accelerating experimental throughput and precision compared to human-driven science. This architecture operationalizes key scientific values—parsimony, repeatability, modularity—in automatable form, suggesting that the sciences of the artificial now encompass not only the objects produced but the very process of knowledge creation itself (Gower et al., 2024).
In summary, the Sciences of the Artificial furnish a comprehensive and evolving framework for understanding, designing, and analyzing artifacts—spanning the generation of novelty in design, the Darwinian mechanisms of improvement, the biological principles underlying scalable systems, the mapping of intelligence in abstract space, and the automation of scientific practice. This convergence enables both the systematic expansion of possible artifacts and ongoing theoretical refinement of what it means to build and know in artificial domains.