AAA Algorithm for Rational Approximation
- The AAA algorithm is an adaptive method for rational approximation that combines a numerically robust barycentric representation with a greedy support-point selection strategy.
- It constructs a rational interpolant through iterative SVD-based least-squares steps, mitigating the ill-conditioning seen in classical polynomial-quotient representations.
- Its automated, parameter-free workflow and flexibility have driven wide application in control theory, numerical linear algebra, and approximation on complicated complex domains.
The AAA (adaptive Antoulas–Anderson) algorithm is an adaptive method for rational approximation on real or complex domains, combining a numerically robust barycentric representation with a greedy support-point selection strategy. By iteratively building a barycentric interpolant through the selected support points and choosing the barycentric weights via a linearized least-squares step, AAA avoids the ill-conditioning common in classical rational or polynomial-quotient representations. It is notable for its completely black-box workflow: no user tuning or initial parameter guess is needed. AAA performs well on simple intervals and disks as well as on highly nontrivial problems (including functions with branch points or poles and domains with nontrivial topology), and often outperforms classical methods such as vector fitting and RKFIT. The algorithm's ease of use, flexibility, and robust numerical conditioning have led to widespread application in approximation theory, numerical linear algebra, control theory, and beyond.
1. Barycentric Rational Representation
The AAA algorithm constructs rational approximants in barycentric form. Given support points $z_1, \dots, z_m$ (selected from a sample set $Z$), associated function values $f_1, \dots, f_m$, and barycentric weights $w_1, \dots, w_m$, the rational approximant is defined by

$$
r(z) = \frac{n(z)}{d(z)} = \frac{\sum_{j=1}^{m} \dfrac{w_j f_j}{z - z_j}}{\sum_{j=1}^{m} \dfrac{w_j}{z - z_j}}.
$$
This form, associated with the rational interpolation work of Antoulas and Anderson from which the algorithm takes its name, is preferred because of its excellent numerical conditioning even when support points or poles cluster near singularities. Because large values in the numerator and denominator cancel in the quotient, the barycentric form avoids the exponential amplification of roundoff errors that can make polynomial-quotient or monomial representations unusable at moderate degrees.
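To make the representation concrete, here is a minimal numpy sketch of evaluating such a barycentric rational from given support points, data values, and weights. It is an illustration rather than the Chebfun implementation, and the name `bary_eval` is chosen here for convenience:

```python
import numpy as np

def bary_eval(z, zj, fj, wj):
    """Evaluate the barycentric rational r(z) = n(z)/d(z) at the points z,
    given support points zj, data values fj, and weights wj."""
    z = np.atleast_1d(np.asarray(z, dtype=complex))
    zj, fj, wj = (np.asarray(a, dtype=complex) for a in (zj, fj, wj))
    with np.errstate(divide="ignore", invalid="ignore"):
        C = 1.0 / (z[:, None] - zj[None, :])        # Cauchy matrix
        r = (C @ (wj * fj)) / (C @ wj)              # n(z) / d(z)
    for k, zk in enumerate(zj):                     # formula is 0/0 at a support point;
        r[z == zk] = fj[k]                          # interpolation gives fj there
    return r
```

Because the formula is 0/0 exactly at a support point, the sketch patches those entries with the data value fj, which is precisely the interpolation property the barycentric form is designed to preserve (for any nonzero weights).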
2. Adaptive Greedy Support Point Selection
The key innovation of AAA is the adaptive, greedy selection of the interpolation nodes (support points). Rather than prescribing the locations of poles or using a fixed basis, AAA increments the support set one point at a time. At iteration $m$, it computes the current rational approximant $r(z)$ and identifies the sample point where the approximation error $|f(z) - r(z)|$ is maximized. This point is added to the support set, and the barycentric weights are recomputed.
The weights are obtained by solving a linearized least-squares problem: on each iteration, the so-called Loewner (or divided-difference) matrix $A$ is formed with entries

$$
A_{ij} = \frac{F_i - f_j}{Z_i - z_j},
$$

where $Z_i$ (with data value $F_i$) runs over all non-support sample points and $z_j$ (with value $f_j$) runs over the support set. The right singular vector of $A$ corresponding to its smallest singular value (computed from the SVD) gives the barycentric weights. This process maintains interpolation at the support points and minimizes the linearized residual on the remaining data, ensuring rapid error convergence and robust conditioning.
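A sketch of this weight-selection step, assuming the full samples Z (with data F) and the current support points zj (with values fj) are numpy arrays, and using numpy's convention that the last row of Vh from the SVD is the right singular vector of the smallest singular value:

```python
import numpy as np

def barycentric_weights(Z, F, zj, fj):
    """Form the Loewner matrix A_ij = (F_i - f_j)/(Z_i - z_j) over the
    non-support samples and return the barycentric weights as the right
    singular vector belonging to the smallest singular value."""
    mask = np.all(Z[:, None] != zj[None, :], axis=1)          # non-support rows
    A = (F[mask, None] - fj[None, :]) / (Z[mask, None] - zj[None, :])
    _, _, Vh = np.linalg.svd(A, full_matrices=False)          # thin SVD
    return Vh[-1].conj()                                      # minimal right singular vector
```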
3. Implementation and Computational Workflow
A typical AAA implementation entails:
- Initializing with a sample set $Z = \{Z_1, \dots, Z_M\}$ in the real or complex plane, with data values $F_i = f(Z_i)$.
- For $m = 1, 2, 3, \dots$:
  - Computing the current rational approximant $r(z)$.
  - Selecting the sample point $Z_i$ where the error $|F_i - r(Z_i)|$ is maximal.
  - Augmenting the support set with this point.
  - Forming the Loewner matrix and computing new weights by SVD.
- Iterating until the desired accuracy is reached (default relative tolerance $10^{-13}$).
- Optionally, a postprocessing cleanup step eliminates spurious pole-zero pairs (Froissart doublets) introduced by numerical artifacts.
The algorithm is highly efficient. Each iteration requires only a thin SVD of a matrix whose dimensions are roughly the number of sample points by the number of support points. The entire process can be implemented in a concise Matlab script (as in Chebfun) and is parameter-free: the user need not specify a degree, pole count, or initial guesses.
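Putting the pieces together, a compact self-contained sketch of the greedy loop in Python/numpy might look as follows. It is a simplification for illustration (no Froissart-doublet cleanup, no handling of degenerate cases), not the published Chebfun code:

```python
import numpy as np

def aaa(Z, F, tol=1e-13, mmax=100):
    """Greedy AAA sketch: returns support points zj, data values fj,
    and barycentric weights wj for samples Z with data F."""
    Z = np.asarray(Z, dtype=complex)
    F = np.asarray(F, dtype=complex)
    R = np.full_like(F, F.mean())             # initial "approximant": mean of the data
    in_support = np.zeros(len(Z), dtype=bool)

    for _ in range(mmax):
        # Greedy step: move the worst-approximated sample into the support set.
        j = int(np.argmax(np.abs(F - R)))
        in_support[j] = True
        zj, fj = Z[in_support], F[in_support]

        # Loewner matrix over the remaining samples; weights from the thin SVD.
        Zr, Fr = Z[~in_support], F[~in_support]
        A = (Fr[:, None] - fj[None, :]) / (Zr[:, None] - zj[None, :])
        _, _, Vh = np.linalg.svd(A, full_matrices=False)
        wj = Vh[-1].conj()                    # minimal right singular vector

        # Barycentric evaluation on the non-support samples (exact at support points).
        C = 1.0 / (Zr[:, None] - zj[None, :])
        R = F.copy()
        R[~in_support] = (C @ (wj * fj)) / (C @ wj)

        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return zj, fj, wj
```

On a few hundred samples of a smooth function this loop typically terminates after a modest number of iterations; production implementations add the pole-zero cleanup step described above.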
4. Applications and Variants
The AAA algorithm has been demonstrated on a diverse set of applications:
- Approximation of analytic functions on intervals, disks, and circles.
- Handling functions with branch points or logarithmic singularities.
- Meromorphic function approximation from boundary data.
- Rational approximation on non-circular connected or disconnected domains.
- Classical best-approximation problems (such as the rational approximation of $|x|$ on $[-1, 1]$).
- Rational Chebyshev (minimax) approximation.
- Control-theoretic transfer functions (e.g., clamped beam models).
The algorithm naturally extends or can be modified to:
- Enforce symmetry or real-valuedness when needed.
- Handle vector-valued or even matrix-valued data (e.g., via block-AAA or set-valued AAA approaches).
- Implement Lawson refinement (AAA-Lawson) to approach true minimax (Chebyshev) optimality via iteratively reweighted least squares (a sketch of the reweighting idea follows this list).
- Incorporate cleanup and pole pruning strategies for improved robustness.
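Because the Lawson reweighting idea is simple to state, a minimal sketch is given below. It illustrates iteratively reweighted least squares with the support points held fixed; it is not the published AAA-Lawson code, the name `lawson_refine` is hypothetical, treating the numerator and denominator coefficients as independent unknowns is an assumption of this sketch, and the samples Z are assumed to exclude the support points zj:

```python
import numpy as np

def lawson_refine(Z, F, zj, steps=20):
    """Sketch of Lawson-style reweighting with fixed support points zj:
    numerator and denominator coefficients are treated as free unknowns of a
    weighted linearized least-squares problem, and the Lawson weights are
    multiplied by the current error magnitudes on each pass."""
    C = 1.0 / (Z[:, None] - zj[None, :])   # Cauchy matrix; Z must exclude zj
    lam = np.ones(len(Z))                  # Lawson weights, one per sample
    for _ in range(steps):
        # Linearized residual F*d(Z) - n(Z) = [diag(F) C, -C] [beta; alpha].
        A = np.hstack([F[:, None] * C, -C])
        _, _, Vh = np.linalg.svd(np.sqrt(lam)[:, None] * A, full_matrices=False)
        coef = Vh[-1].conj()
        beta, alpha = coef[:len(zj)], coef[len(zj):]
        R = (C @ alpha) / (C @ beta)       # current rational approximant on the samples
        lam *= np.abs(F - R)               # Lawson update: emphasize large errors
        lam /= lam.sum()
    return alpha, beta
```

Each pass shifts weight toward the currently worst-approximated samples, which is what pushes the error profile toward the equioscillation characteristic of minimax approximants.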
5. Performance and Comparison with Other Algorithms
Empirical benchmarks demonstrate that AAA produces highly accurate (often near-minimax) approximants with fewer support points than classical methods. Its numerical stability benefits from the adaptively constructed barycentric basis: the condition numbers of the resulting rational representations are typically close to 1, apart from extreme configurations.
Compared to vector fitting (which requires a user-supplied initial guess for pole locations and is sensitive to initial conditions) and RKFIT (rational Krylov-based methods with an orthogonal rational function framework), AAA offers:
- Complete automation—no initial parameter tuning required.
- Robustness across both connected and more exotic domains.
- Rapid convergence and low computational cost.
Furthermore, while vector fitting and RKFIT can be highly effective, their iterative or initialization requirements can cause unpredictable failures or slow convergence in challenging situations. The AAA framework sidesteps these issues with its error-driven, adaptive construction.
6. Theoretical Foundations
The AAA algorithm connects directly to deep results in rational approximation theory:
- The barycentric representation avoids exponential ill-conditioning common to polynomial bases, allowing effective use of high-degree approximants.
- The greedy selection strategy corresponds to adaptive error equidistribution, giving the method a "self-healing" character.
- The SVD-based least-squares step ensures well-posedness and numerical robustness, even for challenging functions with clustered singularities.
The algorithm captures essential features of optimal rational approximation—such as exponential clustering of poles near singularities predicted by Newman's and Saff's theory—without the need for explicit singularity analysis.
7. Practical Implications and Utility
AAA is widely adopted in scientific computing, signal and image processing, system identification, model reduction, control design, conformal mapping, and fast numerical solution of PDEs. Its black-box character, flexibility, and speed enable routine use by practitioners without deep expertise in rational approximation theory.
Limitations include cases where extremely ill-distributed data or pathological singularity structures still yield clusters of spurious Froissart doublets, though these are mitigated in practice by cleanup steps. The method accommodates refinement to minimax accuracy via AAA-Lawson and scales gracefully to large datasets due to the reduced problem sizes in each iteration.
In summary, the AAA algorithm is a practical, reliable, and theoretically sound method for rational approximation, offering significant advances in adaptivity, automation, and numerical stability over traditional methods. It forms the basis for a rapidly growing set of variants and applications across numerical analysis and scientific computing.