
AI Feynman: Physics-Inspired Regression

Updated 3 May 2026
  • AI Feynman is a symbolic regression algorithm that employs physics-inspired methods to recover interpretable closed-form formulas from numerical data.
  • It combines neural network interpolation, dimensional analysis, and combinatorial search to detect symmetries and separability in physical laws.
  • The method outperforms traditional tools on benchmark physics equations and extends to dynamical systems modeling with its recursive, modular framework.

AI Feynman is a modular, physics-inspired symbolic regression algorithm designed to discover closed-form analytic expressions from numerical data, focusing on functions arising in the physical sciences. It leverages a combination of neural network interpolation, dimensional analysis, combinatorial search, and symmetry/separability detection to efficiently recover interpretable formulas from data that exhibit the structural properties common to physical laws (Udrescu et al., 2019).

1. Problem Definition and Motivation

Symbolic regression seeks an analytic function $y = f(x_1, \ldots, x_n)$ given numerical pairs $\{(x_1, \ldots, x_n, y)\}$ generated by an unknown $f$. In general, this is an NP-hard problem due to the combinatorial explosion of possible functional forms. In physical contexts, however, the target functions tend to exhibit exploitable structure: dimensional homogeneity ("units"), smoothness, additive or multiplicative separability, translational or scaling symmetries, and a preference for low-order polynomials or compositions of elementary functions. AI Feynman targets this structured regime by recursively decomposing complex regression tasks through the exploitation of these properties, thereby making symbolic regression practical for many physics problems (Udrescu et al., 2019).
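As a concrete illustration of the problem setup, the hedged Python sketch below generates such numerical pairs from a known physics formula (Newton's law of gravitation, one of the Feynman benchmark equations); the sampling ranges are illustrative, not the benchmark's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: Newton's law of gravitation, F = G m1 m2 / r^2.
G = 6.674e-11

n_samples = 10_000
m1 = rng.uniform(1.0, 10.0, n_samples)   # kg (illustrative range)
m2 = rng.uniform(1.0, 10.0, n_samples)   # kg
r = rng.uniform(1.0, 10.0, n_samples)    # m

F = G * m1 * m2 / r**2

# The regression task: given only the table (m1, m2, r, F),
# recover the closed-form expression for F.
data = np.column_stack([m1, m2, r, F])
print(data.shape)  # (10000, 4)
```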

2. Recursive Physics-Inspired Methodology

AI Feynman recursively applies a sequence of modules to transform, simplify, or factor the regression problem, each time reducing its dimensionality or complexity. The main loop is organized as follows:

  • A. Dimensional Analysis: Variables' units are represented as integer vectors. Linear algebra (solving $M p = b$ for the prefactor exponents and $M U = 0$ for the dimensionless input combinations) identifies a monomial prefactor $y^*$, rendering the target dimensionless and reducing the effective input space (see the NumPy sketch after this list).
  • B. Low-Order Polynomial Fit: Fits the data with polynomials up to degree 4. If the root-mean-square (RMS) error is below a threshold ($\epsilon_p \approx 10^{-4}$), the fit is returned.
  • C. Brute-Force Symbolic Search: Enumerates expressions in reverse-Polish notation over a fixed symbol set and evaluates them using description length as the scoring metric:

$$DL = \log_2 N + \lambda \log_2\!\left[\max(1, \epsilon/\epsilon_0)\right]$$

where $N$ is the formula's rank in the enumeration, $\epsilon$ is the fit error, and $\epsilon_0$ is a small reference error. Candidates that minimize the description length (below a threshold) are returned.

  • D. Neural-Network-Based Simplification: When brute force fails, a feed-forward neural network (NN) interpolator is trained to approximate the function. This surrogate is probed for symmetries (translation, scaling), additive/multiplicative separability, and variable equality by evaluating the change in prediction under corresponding input transformations. Successful detection enables problem decomposition or variable replacement, reducing dimensionality.
  • E. Extra Transformations: Candidate transformations (square root, square, logarithm, exponential, trigonometric, etc.) are systematically applied to inputs/outputs to promote simpler formulas.
  • F. Recursion: Whenever the problem is decomposed (e.g., via separability or dimension reduction), AI Feynman is launched recursively on subproblems until the full formula is reconstructed by inverting applied transformations.
  • G. Stopping Criteria: Any module returning a formula with RMS error below its module-specific threshold halts recursion. Global time and expression-length limits for brute-force search ensure practical compute bounds (Udrescu et al., 2019).
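
As referenced in module A above, the following minimal NumPy sketch illustrates the dimensional-analysis step for a single example (gravitational force); the unit vectors, variable names, and the least-squares solve are illustrative assumptions, not AI Feynman's exact implementation.

```python
import numpy as np

# Units as integer vectors over the basis (meter, kilogram, second).
# Example inputs for F = G * m1 * m2 / r^2 (illustrative choice).
units = {
    "G":  np.array([3, -1, -2]),   # m^3 kg^-1 s^-2
    "m1": np.array([0, 1, 0]),     # kg
    "m2": np.array([0, 1, 0]),     # kg
    "r":  np.array([1, 0, 0]),     # m
}
target_units = np.array([1, 1, -2])  # force: m kg s^-2

M = np.column_stack(list(units.values()))   # 3 x 4 matrix of input units
b = target_units

# Solve M p = b for exponents p of a monomial prefactor y* = prod(x_i^p_i).
# lstsq returns the minimum-norm solution; any null-space vector of M
# can be added to p to give another valid prefactor.
p, *_ = np.linalg.lstsq(M, b, rcond=None)
print(dict(zip(units, np.round(p, 6))))  # {'G': 1, 'm1': 1, 'm2': 1, 'r': -2}

# The null space (M U = 0) spans the dimensionless input combinations
# that remain as arguments of the unknown dimensionless function.
_, s, Vt = np.linalg.svd(M)
rank = int(np.sum(s > 1e-10))
U = Vt[rank:].T          # columns of U span the null space of M
print("dimensionless combinations:", U.shape[1])  # 1, namely m1/m2
```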

3. Neural Network Integration and Implementation Details

AI Feynman employs neural networks exclusively as flexible, smooth interpolators for probing function structure, never as the final symbolic output. Architecturally, it uses a 6-layer feed-forward network (the first 3 layers with 128 units, the last 3 with 64) with softplus activations, trained with the Adam optimizer using weight decay and a super-convergence learning-rate schedule, and evaluated by RMS error on held-out data (80/20 train/validation split). On clean data the NN validation error $\epsilon_{\rm NN}$ is typically small, and the detection thresholds for symmetries, separability, and variable equality scale with $\epsilon_{\rm NN}$. Poor NN fits (large $\epsilon_{\rm NN}$) can limit the effectiveness of structure detection (Udrescu et al., 2019).
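
A minimal sketch of the separability probe follows, using exact test functions in place of a trained NN surrogate (the functions, probe ranges, and tolerance are illustrative; in the actual method the tolerance scales with $\epsilon_{\rm NN}$). It exploits the fact that additive separability $f(x_1,x_2)=g(x_1)+h(x_2)$ forces the mixed difference $f(a,b)+f(c,d)-f(a,d)-f(c,b)$ to vanish, and that probing $\log|f|$ reduces multiplicative separability to the additive case.

```python
import numpy as np

def additively_separable(f, n_probes=100, tol=1e-6, seed=0):
    """Check f(x1, x2) ~ g(x1) + h(x2) via the mixed difference:
    f(a, b) + f(c, d) - f(a, d) - f(c, b) == 0 for separable f."""
    rng = np.random.default_rng(seed)
    a, b, c, d = rng.uniform(0.1, 1.0, size=(4, n_probes))
    residual = f(a, b) + f(c, d) - f(a, d) - f(c, b)
    return np.max(np.abs(residual)) < tol

# Stand-ins for a trained NN surrogate (illustrative):
f_sep = lambda x1, x2: np.sin(x1) + x2**2   # additively separable
f_mix = lambda x1, x2: np.sin(x1 * x2)      # not separable

print(additively_separable(f_sep))  # True
print(additively_separable(f_mix))  # False

# Multiplicative separability f = g(x1) * h(x2) reduces to the
# additive case by probing log|f| instead of f.
f_mul = lambda x1, x2: np.exp(x1) * (1.0 + x2**2)
print(additively_separable(lambda a, b: np.log(np.abs(f_mul(a, b)))))  # True
```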

4. Algorithmic Pipeline and Pseudocode

A condensed pseudocode representation clarifies the recursive, modular nature of AI Feynman's workflow. The Python-style sketch below is reconstructed from the module descriptions above; the helper names and thresholds are illustrative, not the authors' published code:
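
```python
def ai_feynman(data, time_budget):
    """Recursive AI Feynman loop (illustrative sketch; all helper
    functions here are hypothetical stand-ins for the paper's modules)."""
    # A. Dimensional analysis: factor out a monomial prefactor y* and
    # continue with a dimensionless, lower-dimensional problem.
    prefactor, data = dimensional_analysis(data)

    # B. Low-order polynomial fit (degree <= 4, threshold eps_p ~ 1e-4).
    poly = fit_polynomial(data, max_degree=4)
    if rms_error(poly, data) < 1e-4:
        return prefactor * poly          # symbolic product

    # C. Brute-force search over reverse-Polish expressions, scored by
    # description length DL = log2(N) + lambda * log2(max(1, eps/eps0)).
    expr = brute_force_search(data, time_budget)
    if expr is not None:
        return prefactor * expr

    # D/E. Train an NN interpolator, probe it for symmetries,
    # separability, and variable equality, and try extra input/output
    # transformations (sqrt, square, log, exp, trig, ...).
    surrogate = train_interpolator(data)
    structure = detect_structure(surrogate, data)
    if structure is not None:
        # F. Recurse on the simpler subproblems, then invert the
        # transformations to reassemble the full formula.
        parts = [ai_feynman(sub, time_budget)
                 for sub in structure.decompose(data)]
        return prefactor * structure.recombine(parts)

    # G. Give up gracefully within the global time/length budget.
    return prefactor * best_candidate_so_far(data)
```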

This recursive design enables systematic reduction and efficient solution of high-dimensional, structured regression tasks (Udrescu et al., 2019).

5. Benchmark Performance and Quantitative Results

AI Feynman demonstrates marked improvements over previous symbolic regression tools. On the canonical set of 100 physics equations from the Feynman Lectures, AI Feynman recovers all 100, compared to 71/100 for Eureqa. On a set of 20 "hard" physics expressions, AI Feynman's success rate is 90%, compared to 15% for Eureqa (measured with a 2-hour CPU time limit per equation). Solve times range from seconds for the simplest formulas to hours for the hardest solved cases (Udrescu et al., 2019). Summary statistics:

| Dataset | Eureqa success | AI Feynman success |
|---|---|---|
| 100 Feynman Lectures equations | 71% | 100% |
| 20 bonus ("hard") equations | 15% | 90% |

(Udrescu et al., 2019)

6. Extensions to Dynamical Systems

An adaptation, termed "DynAIFeynman" [Editor's term], extends the framework to inferring ordinary differential equation (ODE) structures from time series. The problem is recast as discovering $f$ in $\dot{x} = f(x)$ from discrete trajectory measurements, with inputs $x(t_i)$ and noisy derivatives $\dot{x}(t_i)$ estimated by finite differences. The pipeline mirrors that of AI Feynman: neural network surrogates identify patterns, separability, and symmetries; low-degree polynomial modules are emphasized because time dependencies are frequently simple; complexity control is more aggressive; and description length is explicitly penalized. In comparative experiments on Lotka–Volterra, simple-pendulum, and cart–pole systems, DynAIFeynman outperformed both a grammar-based genetic algorithm and SINDy sparse regression on most tasks, especially on low- and intermediate-complexity vector fields. For example, on Lotka–Volterra:

| Method | Lotka–Volterra RMSE |
|---|---|
| DynAIFeynman | 0.19 ± 0.05 |
| GA-Baseline | 2.13 ± 1.11 |
| SINDy | 0.24 |

Challenges remain, particularly for high-dimensional systems or those with nested rational/trigonometric forms; nonetheless, DynAIFeynman systematically recovers leading-order dependencies (e.g., trigonometric terms in the pendulum, bilinearities in predator–prey models) (Weilbach et al., 2021).
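
As a hedged illustration of the derivative-estimation step, the sketch below simulates a Lotka–Volterra trajectory, adds measurement noise, and estimates $\dot{x}(t_i)$ by central finite differences; the dynamics parameters, step size, and noise level are illustrative assumptions.

```python
import numpy as np

# Simulate a Lotka-Volterra trajectory (illustrative parameters).
def lotka_volterra(state, a=1.0, b=0.1, c=1.5, d=0.075):
    x, y = state
    return np.array([a * x - b * x * y, -c * y + d * x * y])

dt, n_steps = 0.01, 2000
traj = np.empty((n_steps, 2))
traj[0] = (10.0, 5.0)
for k in range(n_steps - 1):            # simple RK4 integrator
    s = traj[k]
    k1 = lotka_volterra(s)
    k2 = lotka_volterra(s + 0.5 * dt * k1)
    k3 = lotka_volterra(s + 0.5 * dt * k2)
    k4 = lotka_volterra(s + dt * k3)
    traj[k + 1] = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Add measurement noise, then estimate derivatives by central differences.
noisy = traj + np.random.default_rng(0).normal(0, 1e-3, traj.shape)
xdot = np.gradient(noisy, dt, axis=0)   # central differences in the interior

# Symbolic-regression inputs: states x(t_i); targets: estimated xdot(t_i).
X, Y = noisy, xdot
print(X.shape, Y.shape)  # (2000, 2) (2000, 2)
```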

7. Limitations and Future Directions

The primary limitation of AI Feynman is the computational bottleneck in the brute-force search for long or nested expressions; such cases are curbed by expression-length cutoffs or timeouts. Additionally, non-negligible neural network fitting error can prevent detection of subtle structural properties such as weak separability. Hyperparameters (error thresholds, regularization coefficients, symbol sets) require careful domain-specific tuning, and the method does not natively handle formulas involving integrals or derivatives unless extended input representations are introduced.

Proposed extensions include expanding the set of robust transformations (e.g., tanh, piecewise), incorporating learnable numerical constants into the grammar as in Eureqa, employing hybrid genetic search within factored problems, and advancing neural architectures to further reduce interpolation error floors. For dynamical settings, accurate derivative estimation (e.g., via total-variation regularization), residual modeling, and Bayesian model selection along the complexity–error Pareto front are active areas of research (Udrescu et al., 2019, Weilbach et al., 2021).

A plausible implication is that, by exploiting the physics-inspired simplifying structures present in many real-world problems, AI Feynman and its extensions can make symbolic regression a viable practical tool for interpretable model discovery in a broad array of scientific domains.

References (2)

  • Udrescu, S.-M., & Tegmark, M. (2019). AI Feynman: A physics-inspired method for symbolic regression. arXiv:1905.11481; published in Science Advances, 6(16), eaay2631 (2020).
  • Weilbach, J., et al. (2021). Inferring the Structure of Ordinary Differential Equations. arXiv preprint.
