
Applications of AAA rational approximation (2510.16237v1)

Published 17 Oct 2025 in math.NA and cs.NA

Abstract: The AAA algorithm for rational approximation is employed to illustrate applications of rational functions all across numerical analysis.

Summary

  • The paper introduces the AAA algorithm that robustly delivers high-accuracy rational approximations through a barycentric representation and a greedy descent strategy.
  • It demonstrates efficient methods for analytic continuation, pole-zero localization, and reliable derivative and integral computations, achieving up to 14 digits of accuracy.
  • The work extends AAA to diverse areas including PDEs, model order reduction, and nonlinear eigenproblems, while outlining 20 open problems for future research.

Applications of AAA Rational Approximation: Theory, Algorithms, and Practice

Introduction and Algorithmic Foundations

The AAA (Adaptive Antoulas-Anderson) algorithm provides a robust, flexible, and highly accurate approach to rational approximation of functions on arbitrary real or complex domains. The central innovation of AAA is its use of a barycentric representation for rational functions, which circumvents the numerical instability inherent in the classical quotient form p(z)/q(z), especially when poles and zeros cluster near singularities. The algorithm proceeds via a greedy descent strategy, adaptively selecting support points to minimize the maximum error over a discrete sample set, and solving a sequence of linearized least-squares problems using the Loewner matrix and SVD.

The barycentric form for a rational function of degree n is

r(z) = (∑_{k=0}^{n} f_k β_k/(z - t_k)) / (∑_{k=0}^{n} β_k/(z - t_k)),

where {t_k} are support points (nodes), {f_k} are function values at those nodes, and {β_k} are barycentric weights. The AAA algorithm adaptively selects the t_k by maximizing the current error at each step, and computes the weights via the right singular vector of the Loewner matrix corresponding to the smallest singular value.

The computational complexity is O(mn^3), where m is the number of sample points and n the degree, with m typically scaling with n for adequate resolution. For n < 150 and m < 3000, computation is typically sub-second on commodity hardware.
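The greedy Loewner/SVD loop described above is short enough to sketch directly. The following is a minimal NumPy illustration of the core idea, not a reference implementation: the names `aaa` and `bary` are ours, and cleanup of spurious poles, ties, and degenerate sample sets is deliberately omitted.

```python
import numpy as np

def aaa(Z, F, tol=1e-13, mmax=100):
    """Greedy AAA fit on samples (Z, F): returns support points t,
    support values f, and barycentric weights w. Minimal sketch only."""
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    R = np.full_like(F, F.mean())            # current approximant on Z
    t, f = [], []
    mask = np.ones(len(Z), dtype=bool)       # True = not yet a support point
    for _ in range(mmax):
        # Greedy step: promote the sample with the largest current error.
        j = np.argmax(np.where(mask, np.abs(F - R), 0.0))
        t.append(Z[j]); f.append(F[j]); mask[j] = False
        ta, fa = np.array(t), np.array(f)
        C = 1.0 / (Z[mask, None] - ta[None, :])            # Cauchy matrix
        L = F[mask, None] * C - C * fa[None, :]            # Loewner matrix
        # Weights: right singular vector for the smallest singular value.
        w = np.linalg.svd(L)[2][-1].conj()
        R = F.copy()                         # interpolates at support points
        R[mask] = (C @ (w * fa)) / (C @ w)
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return np.array(t), np.array(f), w

def bary(z, t, f, w):
    """Evaluate the barycentric form at points z (assumed distinct from t)."""
    C = 1.0 / (np.asarray(z, dtype=complex)[:, None] - t[None, :])
    return (C @ (w * f)) / (C @ w)
```

Production codes (e.g., Chebfun's aaa) add safeguards on top of this core loop, such as cleanup of Froissart doublets, column scaling, and support for vector-valued data.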

Function Approximation and Analytic Continuation

AAA excels in both offline (library construction) and online (on-the-fly) function approximation. For meromorphic functions sampled at scattered points, AAA can recover values at arbitrary locations with high accuracy, often exceeding 10 digits with moderate sample sizes. For example, approximating f(z) = tan(z)/tan(2) at 50 random points yields 11–14 digits of accuracy at z = 2.

Figure 1: AAA recovers the value of a meromorphic function at z = 2 from 50 scattered samples in the complex plane, achieving 11 digits of accuracy.

For analytic continuation, AAA rational approximants can extend function values far beyond the original data domain, outperforming polynomial approximants, which are limited by the nearest singularity (as dictated by Bernstein ellipses and potential theory). The "one-wavelength principle" is observed: the number of wavelengths over which analytic continuation is successful scales linearly with the number of digits of accuracy in the rational approximation.

Figure 2: AAA rational approximation of tanh(z) on [-1, 1] enables analytic continuation well beyond the interval, unlike polynomial minimax fitting.

Pole and Zero Localization

AAA provides a practical and accurate method for locating poles and zeros of analytic and meromorphic functions from sampled data, generalizing the classical Padé approach to arbitrary sample sets. Poles and zeros are computed a posteriori via a generalized eigenvalue problem derived from the barycentric representation. In practice, poles and zeros within the data cloud are typically accurate to the approximation tolerance (e.g., 13 digits), and those further away degrade gracefully.

Figure 3: Poles (red) and zeros (blue) of the AAA approximant to f(z) = tan(z)/tan(2) closely match the true singularities, even outside the sample region.

This capability extends to rootfinding for analytic functions, where AAA-based "proxy rootfinding" can outperform bespoke polynomial-based methods, and to the identification of branch points via rational approximation of the logarithmic derivative.
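The a posteriori pole computation mentioned above admits a compact sketch: given support points t_k and weights w_k from a barycentric fit, the poles are the finite generalized eigenvalues of an arrowhead pencil. The helper name `bary_poles` is illustrative, and the snippet assumes NumPy and SciPy.

```python
import numpy as np
from scipy.linalg import eig

def bary_poles(t, w):
    """Poles of a barycentric rational r(z) whose denominator is
    D(z) = sum_k w_k/(z - t_k): the finite generalized eigenvalues
    of the arrowhead pencil (E, B) built below."""
    m = len(t)                            # number of support points (n + 1)
    E = np.zeros((m + 1, m + 1), dtype=complex)
    E[0, 1:] = w                          # first row: barycentric weights
    E[1:, 0] = 1.0                        # first column: ones
    E[1:, 1:] = np.diag(t)                # support points on the diagonal
    B = np.eye(m + 1)
    B[0, 0] = 0.0                         # singular B => two infinite eigenvalues
    lam = eig(E, B, right=False)
    return lam[np.isfinite(lam)]          # keep the finite ones: the poles

# Example: nodes 0 and 1 with equal weights give D(z) = 1/z + 1/(z - 1),
# which vanishes at z = 1/2, so the single pole is 0.5.
poles = bary_poles(np.array([0.0, 1.0]), np.array([1.0, 1.0]))
```

Replacing the weight row w by the products w_k f_k yields the zeros of the approximant instead; practical codes also discard "spurious" poles whose residues are negligible.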

Derivatives, Integrals, and Quadrature

AAA rational approximants can be differentiated and integrated efficiently. The derivative in barycentric form is computed via divided differences, and for higher derivatives, recursive formulas are available. For integration, conversion to partial fractions enables termwise integration, and for certain classes of integrals (e.g., over the real line), residue calculus applied to the rational approximant yields highly accurate results.

Figure 4: Comparison of polynomial and rational approximations for differentiating f(x) = tanh(8x); the rational approach achieves similar accuracy with an order-of-magnitude lower degree.

AAA-based quadrature formulas are constructed by approximating the Cauchy transform of the weight function, with the quadrature nodes corresponding to the poles of the rational approximant. This approach generalizes classical Gaussian quadrature and enables the construction of optimal quadrature rules for nonstandard domains and weights.
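Termwise integration via partial fractions, as described above, reduces an integral of a rational function to a sum of logarithms. Here is a small self-contained illustration; the helper `pf_integral` and the closed-form example are ours, and the principal-branch logarithms are valid here because the real integration path does not cross the branch cuts emanating from the off-axis poles.

```python
import numpy as np

def pf_integral(a, b, poles, resid):
    """Integrate r(x) = sum_j resid_j/(x - poles_j) from a to b termwise:
    each term contributes resid_j * (log(b - poles_j) - log(a - poles_j))."""
    poles = np.asarray(poles, dtype=complex)
    resid = np.asarray(resid, dtype=complex)
    return np.sum(resid * (np.log(b - poles) - np.log(a - poles)))

# Example: 1/(x^2 + 1) has poles at +i and -i with residues -i/2 and +i/2,
# and the exact integral over [-1, 1] is pi/2.
val = pf_integral(-1.0, 1.0, [1j, -1j], [-0.5j, 0.5j])
```

In practice the poles and residues would come from the partial fractions form of an AAA approximant rather than from a closed-form example.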

Equispaced Interpolation and Data Imputation

AAA provides a robust solution to the classical Runge and Gibbs phenomena in equispaced interpolation. Unlike polynomials and trigonometric polynomials, which suffer from divergence or endpoint artifacts, AAA rational approximants achieve steady convergence and high accuracy for analytic and smooth functions, even in the presence of missing data (imputation).

Figure 5: AAA rational approximation outperforms polynomial and trigonometric interpolation for equispaced data, avoiding the Runge and Gibbs phenomena.

Figure 6: AAA fills in large gaps in analytic data with high accuracy, even when 27.5% of samples are missing.

Inverse Functions and Multivariate Extension

AAA can be used to compute numerical inverses of functions, including those with singularities, by simply swapping the roles of input and output in the sample set. This is particularly effective for functions with ill-conditioned inverses, such as sin(x) or solutions to ODEs.

Figure 7: AAA computes the inverse of a numerically solved ODE initial value problem, enabling rapid evaluation of the inverse function.

For multivariate problems, while no direct analog of AAA exists due to the lack of a multivariate barycentric formula, a practical workaround is to apply AAA along one-dimensional slices (e.g., lines or circles), enabling high-accuracy extension and imputation in higher dimensions.

Figure 8: AAA-based imputation along horizontal slices reconstructs missing data in a bivariate analytic function.

PDEs, Laplace, and Helmholtz Problems

The AAALS (AAA-least squares) method, introduced by Costa, extends AAA to the solution of Laplace and related PDEs by fitting boundary data with the real part of a rational function whose poles are adaptively chosen outside the domain. This approach achieves high accuracy, even in domains with corners or nonconvexity, where polynomial methods fail due to slow convergence.

Figure 9: Poles of the AAA approximant to the Schwarz function of a closed curve cluster near regions of high curvature, revealing the analytic structure of the domain.

For the Helmholtz equation and scattering problems, AAA is used to place singularities (e.g., Hankel functions) optimally, enabling efficient and accurate solution of high-frequency wave problems.

Model Order Reduction, Eigenvalue Problems, and Nonlinear Eigenproblems

In model order reduction (MOR) and reduced order modeling (ROM), AAA is used to approximate high-degree transfer functions by lower-degree rational functions, often achieving 10+ digits of accuracy with a dramatic reduction in degree. The method is competitive with, and often superior to, classical approaches such as vector fitting and the Loewner framework, especially in the SISO case.

For large-scale eigenvalue problems, AAA approximates the scalarized resolvent (zI - A)^{-1}, with the poles of the rational approximant corresponding to eigenvalues of A within the region of interest. This approach is scalable, parallelizable, and effective for both linear and nonlinear eigenvalue problems, including those arising in photonics and resonance computations.

Figure 10: AAA accurately locates poles (resonances) of the Riemann zeta function from sampled data on a line in the complex plane.

Zolotarev Problems, Best Approximation, and Algorithmic Extensions

AAA, together with the AAA-Lawson extension, enables the computation of best (minimax) rational approximations in the ∞-norm, both for real and complex data. This is critical for applications such as digital filter design, spectral slicing, and divide-and-conquer eigenvalue algorithms, where Zolotarev sign and ratio problems arise. The paper introduces modifications to AAA (e.g., the 'sign' and 'damping' options) to address convergence issues in these challenging settings.

Figure 11: Error curves for AAA and AAA-Lawson approximations to Γ(z) on |z| = 1.5; AAA-Lawson achieves a near-circular error curve, indicating near-optimality.

Theoretical Insights and Open Problems

The paper provides a comprehensive potential-theoretic framework for understanding the convergence of rational approximations, including the role of Hermite integrals, equilibrium measures, and the classification of poles (approximation, pole, branch cut, and spurious). It highlights the exponential and root-exponential convergence rates achievable by rational functions, in contrast to the algebraic rates of polynomials in the presence of singularities.

Figure 12: The "one-wavelength principle" in analytic continuation: AAA achieves approximately one wavelength of accurate extension per 13 digits of accuracy.

A set of 20 open problems is articulated, spanning algorithmic, theoretical, and multivariate challenges, including the development of a multivariate AAA, robust best-approximation algorithms, and the extension of AAA to other function classes (e.g., Gaussians, RBFs).

Conclusion

The AAA algorithm and its extensions have transformed the landscape of rational approximation, providing a practical, reliable, and theoretically grounded tool for a wide range of applications in numerical analysis, scientific computing, and engineering. The method's adaptability, accuracy, and ability to handle singularities and complex geometries position it as a central component in modern computational mathematics. The paper's comprehensive treatment, extensive numerical evidence, and identification of open problems set the stage for further advances in rational approximation theory, algorithms, and applications.

Explain it Like I'm 14

Overview

This paper is about a fast, reliable way to approximate complicated functions using rational functions (fractions of polynomials). The method is called the AAA algorithm. The authors show how AAA makes many tasks in numerical analysis easier, from quickly evaluating special functions to finding hidden features like poles and zeros. Their main goal is to explain what AAA can do, why it works, and where it helps in real problems.

What questions does the paper ask?

The paper explores simple, practical questions like:

  • If we can easily build a rational function that closely matches a given function on a set of points, what new things can we do?
  • How does this compare to traditional polynomial methods (like Taylor series)?
  • How can we make the approximation both fast and accurate?
  • Can we trust the results, and what should we watch out for (like fake poles)?
  • How can we refine the approximation when we need the absolute best accuracy?

How does the AAA algorithm work? (Explained in everyday terms)

Think of AAA as a smart “copycat” system that learns a function by looking at its values and then builds a simple fraction that behaves the same way—almost everywhere you care about.

Here’s the idea in simple steps:

  • You give AAA a bunch of points and the function values at those points (these are samples).
  • AAA builds a rational function in a special format called the barycentric form. This format keeps the math stable and accurate, unlike the more fragile “p(z)/q(z)” format.
  • AAA starts with a very simple approximation and improves it step by step. Each time, it looks at where its current guess is worst and adds a new “support point” there—like zooming in on the problem area.
  • At each step, it solves a well-behaved linear algebra problem (using a tool called SVD, which finds the best fit) to adjust the weights of the support points. These weights tell the approximation how strongly to match the function at each point.
  • It stops when the errors are tiny (usually near machine precision, around 13–15 digits for double precision).

Key words explained simply:

  • Rational function: A fraction of two polynomials, like r(z) = p(z)/q(z).
  • Poles and zeros: Special points where the function blows up (poles) or becomes exactly zero (zeros). These matter because they shape how the function behaves.
  • Barycentric representation: A clever way to write rational functions that avoids numerical instability and loss of accuracy.
  • Greedy algorithm: AAA keeps fixing the worst part first—like patching the biggest hole in a boat before the small leaks.
  • SVD: A method that finds the best way to fit data with minimal error.

There are helpful variations:

  • AAA-Lawson: A refinement step that tweaks the approximation to be closer to the best possible of a given size.
  • Continuum AAA: Works directly on continuous sets (like whole circles or intervals), not just discrete sample points.
  • Symmetry tricks: If the data has complex conjugate symmetry, AAA can add points in conjugate pairs to keep the approximation real-symmetric.

What did the authors find, and why is it important?

Main takeaways:

  • AAA is fast and simple: For moderate problem sizes (say, degree up to ~150), it usually runs in under a second on a laptop.
  • It’s very accurate: Often reaches near machine precision (~13–15 digits).
  • It works in many situations: On intervals, circles, scattered points, real or complex data, including functions with singularities.
  • It can “discover” features: AAA often pinpoints poles and zeros of the target function very accurately—even beyond the sampled region. This helps with understanding complex behavior and extrapolation.
  • Rational beats polynomial in tough cases: When functions have sharp features or singularities, rational functions can approximate them much better and with far lower degrees. For example:
    • If a function has branch points (sharp corners in its analytic structure), rational approximations converge “root-exponentially,” which is much faster than polynomials that only improve slowly.
    • If the function is analytic except for poles, rational approximations can improve super fast, whereas polynomials improve only exponentially.
  • It’s robust but not magic: AAA can sometimes produce tiny “spurious poles” (fake features caused by noise or limitations). There are cleanup tricks, and often these don’t harm accuracy away from the pole.

Examples from the paper (explained simply):

  • Approximating tan(z) from scattered points: AAA got 11–14 correct digits at z=2 and accurately found nearby poles and zeros of tan(z), even those outside the sampled region.
  • Approximating Γ(z) (the gamma function): AAA built very accurate rational copies on lines and circles, useful for fast complex evaluations.
  • Zeta function ζ(z): AAA used data on a vertical line to approximate ζ(z), correctly detecting its pole at z=1 and several zeros along the critical line.

Performance and practicality:

  • Cost grows with the degree (roughly like n^4), but for most applications they tried, that’s still fast.
  • There are options to speed parts up or stabilize special cases (scaling columns, using more singular vectors, pairing conjugate points).

What does this mean for the future?

AAA opens a “new era” for rational approximation in numerical analysis:

  • It turns previously hard tasks into almost plug-and-play tools.
  • It’s available in common environments (MATLAB/Chebfun, Python/SciPy, Julia).
  • It helps in building special function libraries, solving differential equations, designing filters, model reduction, and more.
  • It enables better accuracy with smaller models, which saves time and memory.
  • There are still open questions (like guarantees of “near-best” in all cases), and room to make the linear algebra even faster for very large problems.

In short: AAA makes rational functions practical for everyday scientific computing, often beating polynomials when functions have tricky behavior. It’s fast, accurate, and broadly useful—a powerful tool for modern math and engineering.

Knowledge Gaps

Knowledge gaps, limitations, and open questions

Below is a concise list of unresolved issues and open research directions explicitly or implicitly raised by the paper. Each item is formulated to be actionable for future work.

  • Lack of theoretical guarantees of near-bestness: establish conditions under which AAA produces approximants within a provable factor of the degree‑n optimal rational approximation on a continuum domain E (not just accuracy on the discrete sample set Z).
  • Discrete-to-continuum reliability: develop rigorous error bounds connecting convergence on a discrete set Z to uniform convergence on the intended continuum E, including explicit sampling density and distribution requirements to avoid “bad poles.”
  • Control of pole locations: design and analyze constrained AAA schemes that prevent poles in forbidden regions (e.g., inside a real interval), with provable guarantees against spurious or interior poles.
  • Spurious poles and Froissart doublets: provide robust, theoretically justified detection and cleanup procedures (including criteria for residue magnitude and pole–zero proximity), with proofs that cleanup preserves approximation quality.
  • Sampling near singularities: automate and justify exponential clustering of sample points Z near branch points or other singularities to achieve root‑exponential convergence, including adaptive strategies that infer singularity locations during the run.
  • Choice of tolerance and degree: develop principled, adaptive stopping criteria (tol, max degree n) informed by the analytic structure of f and the geometry of E, balancing accuracy, complexity, and risk of ill‑conditioning.
  • Real symmetry enforcement: formalize and analyze symmetry-preserving AAA variants (e.g., conjugate‑pair support point insertion), including proofs of improved accuracy and stability, and generalizations to other symmetries (periodicity, reflection, problem‑specific invariances).
  • Fast and scalable linear algebra: replace the O(n^4) cost with provably stable and efficient algorithms (incremental/tall-skinny SVD updates, randomized sketching) that retain accuracy; resolve stability issues with Gram/Cholesky approaches and quantify performance gains.
  • Ill‑conditioning in the Loewner matrix: provide theoretical guidance for column scaling and for constructing weights from multiple right singular vectors (2025 adjustment), including criteria for when to apply these remedies and bounds on the resulting accuracy.
  • Accuracy of computed poles, zeros, and residues: derive forward error bounds for poles/zeros/residues of the AAA approximant r in terms of Loewner matrix conditioning, sampling configuration, and proximity of poles to sample points.
  • Residue computation methodology: analyze the new (2025) least‑squares residues method vs. the prior algebraic formula, with error bounds, stability proofs, and guidance on when each approach is superior.
  • Continuum AAA maturation: turn continuum AAA into a standard tool with robust software and theory—establish convergence guarantees, pole control, sampling rules for boundary parameterizations, and complexity analyses on common domains.
  • Partial fractions post‑processing: systematize the “acceptable pole” filtering and refitting in partial fractions form, including algorithms to select poles, quantify error changes, and guarantee numerical stability of the refit.
  • AAA-Lawson reliability and speed: improve and analyze AAA‑Lawson (iteratively reweighted least squares) for robustness, especially near machine precision; develop convergence proofs, failure detection, and parameter‑tuning rules (e.g., number of Lawson steps).
  • “Sign” and “damping” enhancements: provide a theoretical foundation and practical guidelines for the ‘sign’ and ‘damping’ features (as in zolo), including when they improve conditioning/accuracy and how to set parameters.
  • Machine precision limitations: characterize when double precision prevents reaching true minimax behavior (e.g., ragged near‑circular error curves) and develop mixed‑precision/extended‑precision frameworks and triggers to switch precision reliably.
  • Guidelines for sample set design: create actionable rules for choosing Z (size m, distribution on E, clustering near singularities, avoidance of near‑pole sample points), with quantifiable impact on accuracy and pole/zero fidelity.
  • Robustness on real intervals: address AAA’s reduced reliability on real intervals (e.g., odd‑degree cases like |x|), by crafting interval‑specific variants or constraints that prevent interior poles while maintaining approximation quality.
  • Extrapolation trustworthiness: quantify how far AAA approximations can be trusted outside the sampled region (analytic continuation), with bounds derived from potential theory/Hermite integrals tied to the chosen Z and the singularity landscape.
  • Comparative benchmarking: perform systematic, application‑specific comparisons between AAA and alternative methods (Padé, Vector Fitting, IRKA, RKFIT, Thiele fractions), including success rates, accuracy, speed, and pole control, across diverse domains.
  • Error metrics beyond ∞-norm: study multi-objective or weighted error criteria (e.g., RMS with constraints on max error) and extend AAA/AAA-Lawson to optimize such metrics, with guarantees and practical algorithms.
  • Parameter auto‑tuning: develop automated selection of AAA parameters (initial step, support point addition rules, termination, Lawson steps) driven by diagnostics (conditioning, error distribution, symmetry) to improve reliability and ease of use.
  • Theoretical models for support point selection: analyze the greedy choice of support points (maximal error) and relate it to equioscillation/minimax theory, providing conditions under which this heuristic yields near‑optimal error curves.
  • Integration with model reduction and control: clarify when AAA (and its variants) are competitive or complementary to established model reduction tools, providing theory and recipes for incorporating dynamics, stability, and structure preservation.
  • Documentation and standardization across implementations: reconcile differences among MATLAB, Julia, Python implementations (e.g., symmetry, cleanup, residue computation), and publish a reference specification with test suites and reproducibility guidelines.

Practical Applications

Immediate Applications

Below is a curated set of practical applications that can be deployed now using the AAA algorithm and its available implementations. Each item notes sectors, likely tools/workflows, and key assumptions or dependencies that affect feasibility.

  • Rational surrogate modeling from scattered samples
    • Sectors: software engineering, scientific computing, applied mathematics
    • Tools/workflows: Chebfun’s aaa.m, SciPy/Python implementations, Julia packages; wrap function evaluations into a compact rational approximant; evaluate at new points with near machine precision
    • Assumptions/dependencies: The target function is meromorphic or well-approximable by rational functions; sample points cover regions of interest without pathological conditioning; set tolerances appropriately (e.g., loosen near singularities)
  • On-the-fly analytic continuation and extrapolation near the data domain
    • Sectors: physics, materials science, signal processing
    • Tools/workflows: Fit AAA from data on an interval, circle, or scattered points; evaluate outside the sampling set to estimate values (e.g., gamma/zeta functions, transfer functions)
    • Assumptions/dependencies: No “bad poles” in the evaluation region; use a posteriori pole/zero inspection; acceptable deviation due to extrapolation errors
  • Pole–zero–residue extraction for black-box functions
    • Sectors: RF/microwave engineering, control systems, power electronics, acoustics
    • Tools/workflows: Compute poles/zeros via generalized eigenvalue problem on AAA barycentric form; validate stability margins, resonance locations, and network synthesis
    • Assumptions/dependencies: Meromorphic behavior near the domain; sufficient sampling density; check for spurious poles/Froissart doublets and filter or refit via partial fractions least-squares
  • Compact storage and transmission of measured or simulated curves
    • Sectors: metrology, materials, energy, manufacturing QA
    • Tools/workflows: Replace large tables with a low-degree barycentric rational model; ship coefficients instead of dense data
    • Assumptions/dependencies: Data quality (noise may induce spurious poles); consider cleanup strategies; maintain metadata about approximation domain/tolerance
  • Fast extension of special-function capabilities in environments lacking complex support
    • Sectors: software libraries, education
    • Tools/workflows: Create rational approximants for functions like Γ(z) or ζ(z) on specified domains; embed approximants in teaching tools or domain-specific apps
    • Assumptions/dependencies: Reliability near machine precision; careful domain selection; ensure reflection formulas or known singularities are respected
  • Rapid resonance identification and spectrum analysis
    • Sectors: NDE (non-destructive evaluation), spectroscopy, structural dynamics
    • Tools/workflows: Fit measured response curves to AAA, extract poles and residues; infer modal characteristics and damping
    • Assumptions/dependencies: Adequate SNR; sample near resonances; validate residues via least-squares fit to mitigate numerical sensitivity
  • System identification and compact transfer-function modeling
    • Sectors: control engineering, robotics, automotive, aerospace
    • Tools/workflows: Integrate AAA as an alternative/complement to Vector Fitting (VF), IRKA, RKFIT; enforce real symmetry by pairing conjugate support points; export models as partial fractions or state-space
    • Assumptions/dependencies: Real-symmetry modifications; sampling strategies aligned with frequency content; handle ill-conditioning via column scaling or multi-vector weight constructions
  • Robust evaluation near branch points and singular boundaries
    • Sectors: computational physics, electromagnetics, computational fluid dynamics
    • Tools/workflows: Use rational fitting for root-exponential convergence near branch points where polynomial methods fail; cluster sampling near singularities; relax tolerances for high-degree fits
    • Assumptions/dependencies: Proper sample clustering near singularities; computational budgets for higher degrees (n ~ O(100–300) can be needed); monitor pole locations
  • Simulation speed-ups via “fit once, evaluate many” surrogate loops
    • Sectors: HPC, digital twin development, parametric studies
    • Tools/workflows: Build AAA surrogates of expensive subroutines; cache and reuse; incorporate pole/zero validation for stability
    • Assumptions/dependencies: Surrogate validity across required parameter ranges; automatic re-fitting when parameter drifts invalidate the model
  • RF/microwave S-parameter compression and modeling
    • Sectors: telecommunications, semiconductors, antenna design
    • Tools/workflows: Use AAA (already in MATLAB RF Toolbox) to fit measured S-parameters; ensure real symmetry; export models for time-domain simulations
    • Assumptions/dependencies: Enforce conjugate support points; verify passivity/causality separately (AAA fits the data but does not enforce those constraints by itself)
  • High-accuracy pole/zero mapping for research in analytic number theory and complex analysis
    • Sectors: academia (mathematics, theoretical physics)
    • Tools/workflows: From line or circle samples of ζ(z) or other complex functions, map poles/zeros and explore behavior on critical lines; compare against known values
    • Assumptions/dependencies: Accurate sampling strategies; potential use of reflection formulas; track approximation error growth away from data domain
  • Teaching and prototyping tool for rational approximation and numerical analysis
    • Sectors: education (undergraduate/graduate)
    • Tools/workflows: Classroom demos using Chebfun/Julia/Python AAA implementations; illustrate barycentric forms, pole/zero computation, and error curves; extend with AAA-Lawson for minimax insights
    • Assumptions/dependencies: Familiarity with numerical linear algebra; manage near-machine-precision effects (jagged error curves)

Long-Term Applications

The following opportunities require further research, scaling, or development (algorithmic, software, or theoretical) before wide deployment.

  • Library-grade minimax rational approximations via AAA-Lawson
    • Sectors: software libraries, scientific computing
    • Tools/workflows: Use AAA as an initialization, then AAA-Lawson (iteratively reweighted least squares) for near-minimax; produce lower-degree/high-accuracy special-function libraries
    • Assumptions/dependencies: AAA-Lawson stability near machine precision; “sign” and “damping” enhancements; robust stopping criteria; careful error certification
  • Continuum AAA for boundary-based approximation and analytic continuation
    • Sectors: computational geometry, PDE boundary value problems, complex analysis
    • Tools/workflows: Fit on curves/regions (intervals, disks, circles, parameterized boundaries) without discrete sampling; integrate with boundary integral solvers
    • Assumptions/dependencies: Mature tooling; parameterization quality; theoretical guarantees linking continuum fits to near-best rational approximants
  • Rational spectral and boundary methods for PDEs/ODEs
    • Sectors: engineering simulation, climate modeling, acoustics/electromagnetics
    • Tools/workflows: Embed AAA-driven rational bases to treat corner singularities/inlets and improve convergence compared to polynomials; automate pole placement via fitting
    • Assumptions/dependencies: Stable differentiation/integration operators in barycentric/partial fractions forms; spurious-pole mitigation; error estimation on complex domains
  • Scalable AAA for large datasets via improved linear algebra
    • Sectors: big data, ML, scientific HPC
    • Tools/workflows: Develop stable O(n^3) or randomized/sketched algorithms for the tall-skinny SVD/Loewner matrix; incremental Cholesky/QR updates with robust conditioning
    • Assumptions/dependencies: Numerical stability of sketching/updates; consistent error control; careful treatment of Gram matrix conditioning
  • Standards for rational surrogates in data repositories and policy guidance
    • Sectors: public research agencies, standards bodies, open data policy
    • Tools/workflows: Publish datasets with certified rational surrogates, domains of validity, and error/tolerance metadata; encourage reproducibility and efficient downstream use
    • Assumptions/dependencies: Community consensus on formats and certification; tooling for validation and compliance; clear guidelines for singularity handling
  • Fast pricing and risk analytics in quantitative finance using rational surrogates
    • Sectors: finance
    • Tools/workflows: Approximate characteristic functions or integrands with rational forms; exploit root-exponential convergence near branch cuts for speed; integrate in calibration loops
    • Assumptions/dependencies: Domain-specific validation; robust extrapolation controls; sensitivity to market data noise
  • Model-based diagnostics and control using pole-aware surrogates
    • Sectors: robotics, autonomous systems, industrial control
    • Tools/workflows: Online AAA fits to identify dynamics, then pole/zero monitoring for health checks; adaptive controllers with rational models
    • Assumptions/dependencies: Real-time constraints; stability guarantees; passivity/causality enforcement layers atop AAA approximations
  • Passivity/causality constrained AAA fits for EM and RF design
    • Sectors: telecommunications, radar, IC design
    • Tools/workflows: Combine AAA with constraint-enforcing post-processing (e.g., convex optimization) to guarantee physical realizability of fitted models
    • Assumptions/dependencies: Efficient constrained fitting algorithms; robust error control; certification routines
  • Automated pole cleanup and certification pipelines
    • Sectors: cross-domain numerical modeling
    • Tools/workflows: Standardized workflows that detect Froissart doublets/spurious poles, refit in partial fractions, and certify domain-safe pole locations; integrate in CI/CD for modeling assets
    • Assumptions/dependencies: Reliable detection thresholds; reproducible refitting strategies; error certification on intended domains
  • Multi-function/vector/matrix-valued AAA beyond scalar approximations
    • Sectors: MIMO systems, networked control, multivariate signal processing
    • Tools/workflows: Extend AAA to handle vector/matrix-valued responses, coupled poles/zeros, and multi-input/multi-output identification
    • Assumptions/dependencies: Generalizations of barycentric representations; scalable pole/zero computations for block structures; stability and conditioning analyses
  • Rational compression and analytics for scientific imaging and inverse problems
    • Sectors: healthcare imaging, geophysics
    • Tools/workflows: Employ AAA to compress forward models and kernels; accelerate iterative inversions via surrogate evaluations; use pole-zero analysis to understand artifacts
    • Assumptions/dependencies: Validation against clinical/field datasets; domain-aware sampling; rigorous error bounds for reconstructions
  • Education-to-industry transfer via robust open-source AAA ecosystems
    • Sectors: education, software engineering
    • Tools/workflows: Mature Julia/Python/Matlab AAA packages with documented best practices (real symmetry, Lawson steps, continuum AAA); tutorial curricula; integration examples across sectors
    • Assumptions/dependencies: Community maintenance; cross-language consistency; example-rich documentation; pathways to certification for industrial adoption
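The Lawson reweighting at the heart of AAA-Lawson is easiest to see in the simpler linear (polynomial) setting. The sketch below illustrates the reweighting idea only, not AAA-Lawson itself: it fits a degree-2 polynomial to |x| on [-1, 1] and lets iteratively reweighted least squares drive the fit toward the minimax error 1/8.

```python
import numpy as np

# Lawson's iteratively reweighted least squares in the linear setting:
# each pass multiplies the sample weights by the current error magnitudes,
# so weight concentrates on the (near-)equioscillation points
x = np.linspace(-1, 1, 200)
F = np.abs(x)
A = np.vander(x, 3)                  # basis columns: x^2, x, 1
wts = np.ones_like(x) / x.size       # Lawson weights, kept normalised
for _ in range(500):
    s = np.sqrt(wts)[:, None]
    c = np.linalg.lstsq(s * A, s[:, 0] * F, rcond=None)[0]
    err = np.abs(F - A @ c)
    wts *= err                       # Lawson update: reweight by error
    wts /= wts.sum()
best = np.max(np.abs(F - A @ c))     # approaches the minimax value 1/8
```

The plain (uniform-weight) least-squares fit has maximum error 3/16; the Lawson iterations pull this down toward the minimax value 1/8. AAA-Lawson applies the same weight update to the linearized barycentric least-squares problem.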

These applications leverage the core innovations of AAA: a robust barycentric representation, a greedy descent strategy using Loewner matrices and SVD, optional symmetry and cleanup enhancements, and straightforward pole/zero computation. Feasibility hinges on thoughtful sampling, error tolerance selection, domain-appropriate checks for spurious poles, and—where needed—post-processing (e.g., AAA-Lawson, passivity/causality enforcement, partial-fractions refitting).
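Those core pieces fit in a short sketch. The following is a bare-bones, illustrative AAA loop (the name `aaa_sketch` is ours; no cleanup, Lawson, or symmetry enhancements): greedy selection of the worst-error sample as the next support point, barycentric weights from the smallest right singular vector of the Loewner matrix, then barycentric evaluation.

```python
import numpy as np

def aaa_sketch(F, Z, mmax=20, tol=1e-12):
    """Bare-bones AAA loop: greedy support-point selection plus a
    Loewner-matrix least-squares problem solved via the SVD."""
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    mask = np.ones(Z.size, dtype=bool)   # True = not yet a support point
    t = np.empty(0, dtype=complex)       # support points
    f = np.empty(0, dtype=complex)       # data values at support points
    w = np.empty(0, dtype=complex)       # barycentric weights
    R = np.full(Z.size, F.mean())        # current approximant on the grid
    for _ in range(mmax):
        j = np.argmax(np.abs(F - R))                 # greedy step
        t = np.append(t, Z[j])
        f = np.append(f, F[j])
        mask[j] = False
        C = 1.0 / (Z[mask, None] - t[None, :])       # Cauchy matrix
        A = (F[mask, None] - f[None, :]) * C         # Loewner matrix
        w = np.linalg.svd(A)[2][-1].conj()           # min. singular vector
        R = F.copy()
        R[mask] = (C @ (w * f)) / (C @ w)            # barycentric evaluation
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return t, f, w

# illustrative use: approximate tan on [-1, 1] from 500 samples
Z = np.linspace(-1, 1, 500)
t, f, w = aaa_sketch(np.tan(Z), Z)
```

For an analytic target like tan on [-1, 1] this loop typically reaches near machine precision well within the 20-step cap; production implementations add pole cleanup, real-symmetry options, and optional Lawson refinement on top of this skeleton.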

Glossary

  • AAA algorithm: A greedy barycentric method for constructing rational approximations from sampled data. "Such an algorithm, the AAA algorithm, was introduced in 2018 \ccite{aaa}."
  • AAA-Lawson algorithm: An extension of AAA that applies iteratively reweighted least squares (Lawson) steps to approach minimax (best L-infinity) rational approximations. "the extension of AAA known as the AAA-Lawson algorithm \ccite{aaaL}"
  • Backward stability: A numerical property meaning the computed result is the exact solution for slightly perturbed input data. "one can say that they are backward stable in the usual sense of numerical linear algebra,"
  • Barycentric representation: A numerically stable form expressing a rational function as a ratio of weighted sums over support points. "it employs a barycentric representation of a rational function rather than the exponentially unstable quotient representation $r(z) = p(z)/q(z)$."
  • Barycentric weights: The coefficients associated with support points in the barycentric formula. "The numbers $\{w_k\}$ are {\em barycentric weights},"
  • Branch points: Points where a function is multi-valued or has non-isolated singularities, often causing algebraic convergence issues. "If ff has one or more branch points, then both polynomial and rational approximations converge"
  • Chebfun: A software system (primarily MATLAB) for numerical computation with functions via piecewise polynomial/rational representations. "Many of these reliable methods are brought together in the Chebfun software system \ccite{chebfun}."
  • Chebyshev polynomials: Orthogonal polynomials on [-1,1] widely used for stable approximation and interpolation. "often based on Chebyshev polynomials, Chebyshev points and Chebyshev interpolants \ccite{atap}."
  • Cholesky factorisations: Decompositions of positive definite matrices used in linear algebra and fast updates. "(based on updating Cholesky factorisations)"
  • Column scaling: Preconditioning technique that rescales matrix columns to improve conditioning and numerical stability. "Another is to apply column scaling to the matrix $A$ when it is highly ill-conditioned."
  • Continuum AAA algorithm: A version of AAA that operates directly on continuous domains (e.g., curves) rather than discrete sample sets. "In the continuum AAA algorithm mentioned above \ccite{continuum}, poles are calculated at every step of the iteration."
  • Dirichlet series: A series of the form $\sum_{k=1}^\infty k^{-z}$ used to define functions like the Riemann zeta function. "The function is defined by the Dirichlet series"
  • Equioscillation: The alternation of equal-magnitude errors at optimally chosen points, characterizing minimax approximations. "a minimax approximation can be obtained with the Remez algorithm based on equioscillation,"
  • Froissart doublets: Near-cancelling pole-zero pairs that often arise spuriously due to noise or numerical effects. "one also speaks of ``Froissart doublets’’ since these are poles that are paired with zeros"
  • Generalised eigenvalue problem: An eigenproblem of the form $Ax=\lambda Bx$ used here to extract poles and zeros of rational functions. "their zeros can be found by solving the $(n+2)\times (n+2)$ generalised eigenvalue problem"
  • Gram matrix: A matrix of inner products whose factorization can be used in fast updates, though with stability caveats. "it is based on the Cholesky factorisation of the Gram matrix, which has stability issues associated with squaring the condition number."
  • Greedy descent algorithm: An iterative method that makes locally optimal choices to reduce error rather than enforcing global optimality. "it is a greedy descent algorithm rather than aiming to enforce optimality conditions"
  • Hermite integrals: Integral representations connected to approximation theory and used to analyze pole/zero localization. "most experts would connect the matter with Hermite integrals and potential theory,"
  • IRKA: The Iterative Rational Krylov Algorithm for model reduction and rational approximation. "\ccite{irka} (``IRKA'')"
  • Loewner matrix: A matrix with entries $(f_j-f_k)/(z_j-t_k)$ central to linearized rational approximation formulations. "known as a {\em Loewner matrix\/} \ccite{abg}."
  • Meromorphic function: A function that is analytic except at isolated poles. "(A meromorphic function is one that is analytic apart from poles.)"
  • Minimax approximation: Best uniform (infinity-norm) approximation minimizing the maximum absolute error over a set. "rational minimax (best \infty-norm) approximations"
  • Partial fractions form: Representation of a rational function as a sum of simple rational terms with distinct poles. "Both of these expressions are in partial fractions form,"
  • Potential theory: A mathematical framework (harmonic/potential functions) underpinning asymptotic convergence results in approximation. "by arguments of potential theory to be outlined in section"
  • QZ algorithm: The generalized Schur decomposition method for solving generalized eigenvalue problems. "Computing these eigenvalues using the standard QZ algorithm~\ccite{molerstewart} requires $O(n^3)$ operations,"
  • Randomised sketching: Techniques using random projections to reduce problem dimension and accelerate linear algebra. "(based on randomised sketching)."
  • Reflection formula: A functional identity relating values of a function at $z$ and $1-z$ (e.g., for the Riemann zeta function). "where $\zeta(z)$ is evaluated by the reflection formula"
  • Removable singularity: A point at which a function’s expression is singular but the function can be defined to be analytic. "Thus $t_k$ is a removable singularity of (\ref{aaa-baryrep}),"
  • Residues: Coefficients of the principal part of a function at its poles, indicating pole strength. "the poles, residues and zeros of the approximation $r(z)\approx f(z)$."
  • Remez algorithm: An algorithm to compute minimax (best uniform) polynomial or rational approximations. "a minimax approximation can be obtained with the Remez algorithm based on equioscillation,"
  • Root-exponential convergence: Error decay of the form $\exp(-C\sqrt{n})$ typical for rational approximation near branch singularities. "converge root-exponentially, i.e., at a rate $\exp(-C\sqrt n)$"
  • Runge phenomenon: Instability/oscillation of polynomial interpolation at equispaced points on an interval. "the Runge phenomenon \ccite{atap}: interpolation of all the data does not ensure that anything useful has been achieved."
  • Singular value decomposition (SVD): Matrix factorization $A=U\Sigma V^*$ used to compute barycentric weights in AAA. "computing the singular value decomposition (SVD) of $A$"
  • Support points (nodes): Selected sample points where the rational approximant interpolates the data in barycentric form. "The numbers $\{t_k\}$, which are $n+1$ distinct entries of $Z$, are {\em support points} or {\em nodes}."
  • Thiele continued fractions: A continued-fraction representation enabling greedy rational interpolation. "namely Thiele continued fractions \ccite{salazar,driscollzhou,driscolljuliacon}."
  • Vandermonde with Arnoldi orthogonalisation: A stable procedure to build polynomial bases for fitting/interpolation. "(A well-conditioned basis for the polynomial can be constructed by Vandermonde with Arnoldi orthogonalisation~\ccite{VA}.)"
  • Vector fitting: A practical algorithm for rational approximation by fitting frequency-domain data. "\ccite{vf} (``Vector fitting'')"
  • Winding number: The number of times a curve wraps around the origin, used to assess near-optimality of error curves. "the error curve is approximately a circle of winding number 21 \ccite{nearcirc}."
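The generalised eigenvalue problem entry above has a direct computational counterpart. The sketch below (the function name `bary_poles` is ours; it assumes SciPy's `scipy.linalg.eig` for the generalised problem) computes the poles of a barycentric rational from its support points and weights by building the standard pencil with B = diag(0, 1, ..., 1) and discarding the pencil's two infinite eigenvalues.

```python
import numpy as np
from scipy.linalg import eig

def bary_poles(t, w):
    """Poles of the barycentric rational with support points t and
    weights w, via the generalised eigenvalue problem A v = lam B v
    with B = diag(0, 1, ..., 1)."""
    m = len(t)
    A = np.zeros((m + 1, m + 1), dtype=complex)
    A[0, 1:] = w                    # first row carries the weights
    A[1:, 0] = 1.0
    A[1:, 1:] = np.diag(t)          # support points on the diagonal
    B = np.eye(m + 1)
    B[0, 0] = 0.0
    lam = eig(A, B, right=False)
    return lam[np.isfinite(lam)]    # drop the pencil's infinite eigenvalues

# example: with support points {1, 2} and weights {1, 1} the denominator
# w1/(z-1) + w2/(z-2) vanishes at z = 3/2
poles = bary_poles(np.array([1.0, 2.0]), np.array([1.0, 1.0]))
```

For the zeros rather than the poles, the same construction applies with the weights in the first row replaced by the products $w_k f_k$.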