A refined nonlinear least-squares method for the rational approximation problem
The adaptive Antoulas-Anderson (AAA) algorithm for rational approximation is a widely used method for the efficient construction of highly accurate rational approximations to given data. While AAA can often produce rational approximations accurate to any prescribed tolerance, these approximations may have degrees larger than what is actually required to meet the given tolerance. In this work, we consider the adaptive construction of interpolating rational approximations while aiming for the smallest feasible degree to satisfy a given error tolerance. To this end, we introduce refinement approaches to the linear least-squares step of the classical AAA algorithm that aim to minimize the true nonlinear least-squares error with respect to the given data. Furthermore, we theoretically analyze the derived approaches in terms of the corresponding gradients from the resulting minimization problems and use these insights to propose a new greedy framework that ensures monotonic error convergence. Numerical examples from function approximation and model order reduction verify the effectiveness of the proposed algorithm to construct accurate rational approximations of small degrees.
💡 Research Summary
The paper addresses a notable limitation of the Adaptive Antoulas‑Anderson (AAA) algorithm, which is widely used for data‑driven rational approximation. While AAA often reaches a prescribed error tolerance, it can produce rational approximants whose degrees exceed the minimal degree required for that tolerance. This over‑parameterization is problematic in applications such as frequency‑domain model order reduction, where the degree of the rational function directly determines the size of the reduced‑order model.
To remedy this, the authors propose a refined algorithm, termed NL‑AAA, that replaces the linear “Levy” least‑squares step of classic AAA with an iterative solution of the true nonlinear rational least‑squares problem. Two well‑known iterative schemes are employed: the Sanathanan‑Koerner (S‑K) iteration and Whitfield’s iteration. Both methods linearize the problem around the current set of barycentric weights, solve a weighted linear least‑squares problem, and then update the weights. Crucially, the authors derive the Wirtinger gradients of the original nonlinear objective and of the S‑K and Whitfield updates, showing that the latter capture the true gradient direction more accurately than the Levy approximation.
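A minimal numpy sketch of this refinement step may help make it concrete. The function name `sk_refine`, the variable names, and the use of the magnitude of the previous denominator as the row weighting are illustrative assumptions, not the authors' code: the Levy step initializes the weights from the smallest right singular vector of the linearized residual matrix, and each Sanathanan-Koerner pass reweights the rows by the reciprocal of the current denominator values and re-solves.

```python
import numpy as np

def sk_refine(x, f, z, fz, n_iter=5):
    """Sanathanan-Koerner refinement of barycentric weights (illustrative sketch).

    x, f  : sample points and data values (excluding the support points)
    z, fz : support points and the data values at them
    Returns unit 2-norm barycentric weights w.
    """
    C = 1.0 / (x[:, None] - z[None, :])      # Cauchy matrix C[i, k] = 1/(x_i - z_k)
    A = f[:, None] * C - C * fz[None, :]     # rows: f_i * D(x_i) - N(x_i), linear in w
    # Levy step: the smallest right singular vector minimizes ||A w|| over ||w|| = 1
    _, _, Vh = np.linalg.svd(A, full_matrices=False)
    w = Vh[-1].conj()
    for _ in range(n_iter):
        d = C @ w                            # denominator values at the current weights
        Aw = A / np.abs(d)[:, None]          # S-K reweighting by 1/|D(x_i)|
        _, _, Vh = np.linalg.svd(Aw, full_matrices=False)
        w = Vh[-1].conj()                    # weights stay normalized to unit 2-norm
    return w
```

Because each pass only rescales the rows of the same Cauchy-structured matrix, the per-iteration cost is again a single thin SVD.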
The theoretical contribution consists of three parts. First, the gradient analysis demonstrates that the S‑K and Whitfield updates correspond to a first‑order Taylor expansion of the nonlinear residual, thereby moving the iterate along a descent direction of the genuine objective. Second, the authors prove that, when the support points are selected greedily as in AAA and the weights are normalized to unit ℓ₂ norm, the combined greedy‑plus‑refinement scheme yields a monotone decrease of the overall ℓ₂ error with each added support point. This guarantees that the error never increases as the degree grows, a property not ensured by the original AAA. Third, they show that the refinement steps preserve the structure of the Cauchy matrix arising from the barycentric representation, allowing the use of fast singular‑value decompositions without additional computational burden.
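In the standard barycentric notation for AAA (the symbols below are a plausible reconstruction of the objects discussed above, not taken verbatim from the paper), the quantities involved are:

```latex
% Barycentric rational with support points z_k, data f(z_k), weights w_k
r(x) = \frac{N(x; w)}{D(x; w)}
     = \left. \sum_{k=1}^{d} \frac{w_k f(z_k)}{x - z_k} \middle/ \sum_{k=1}^{d} \frac{w_k}{x - z_k} \right.

% True nonlinear least-squares objective over the remaining samples x_i
J(w) = \sum_{i} \left| f(x_i) - \frac{N(x_i; w)}{D(x_i; w)} \right|^2

% Levy linearization (the classical AAA step), solved subject to \|w\|_2 = 1
J_{\mathrm{Levy}}(w) = \sum_{i} \left| f(x_i)\, D(x_i; w) - N(x_i; w) \right|^2

% Sanathanan-Koerner iteration k: reweight by the previous denominator
J_{\mathrm{SK}}^{(k)}(w) = \sum_{i}
  \frac{\left| f(x_i)\, D(x_i; w) - N(x_i; w) \right|^2}
       {\left| D\!\left(x_i; w^{(k-1)}\right) \right|^2}
```

Note that at a fixed point $w^{(k)} = w^{(k-1)}$ the S-K objective coincides with the true objective $J$, which is consistent with the gradient analysis summarized above.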
Algorithmically, NL‑AAA retains the original AAA framework for support‑point selection: at each iteration the data point with the largest current residual is added to the support set. After updating the support set, instead of solving the homogeneous linear least‑squares problem (the Levy step), the algorithm performs a few (typically 3–5) S‑K or Whitfield iterations to obtain refined barycentric weights. The weights are then renormalized, and the process repeats until the prescribed tolerance is met or a maximum degree is reached. Because the refinement operates on the same Cauchy matrix, the extra cost is modest; the overall complexity remains comparable to classic AAA, even for problems with tens of thousands of samples.
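The loop described above can be sketched end to end. This is an illustrative reimplementation under stated assumptions, not the authors' code: `nlaaa` is a hypothetical name, the data are taken real for simplicity, the refinement uses the S-K reweighting variant, and the stopping test is a relative max-norm tolerance.

```python
import numpy as np

def nlaaa(X, F, tol=1e-10, max_degree=20, n_refine=5):
    """Greedy NL-AAA-style loop (illustrative sketch).

    X, F : sample points and data values
    Returns support points z, support values fz, barycentric weights w.
    """
    X = np.asarray(X, dtype=float)
    F = np.asarray(F, dtype=float)
    mask = np.ones(len(X), dtype=bool)       # points not yet chosen as support points
    R = np.full(len(X), F.mean())            # current approximant values on X
    z = fz = w = None
    for _ in range(max_degree + 1):
        j = np.argmax(np.abs(F - R) * mask)  # greedy step: largest current residual
        mask[j] = False
        z, fz = X[~mask], F[~mask]           # support set
        x, f = X[mask], F[mask]              # remaining least-squares data
        C = 1.0 / (x[:, None] - z[None, :])  # Cauchy matrix
        A = f[:, None] * C - C * fz[None, :] # linearized (Levy) residual matrix
        _, _, Vh = np.linalg.svd(A, full_matrices=False)
        w = Vh[-1].conj()
        for _ in range(n_refine):            # a few S-K refinement passes
            d = C @ w
            _, _, Vh = np.linalg.svd(A / np.abs(d)[:, None], full_matrices=False)
            w = Vh[-1].conj()                # renormalized to unit 2-norm by the SVD
        R = np.copy(F)                       # interpolation at the support points
        R[mask] = (C @ (w * fz)) / (C @ w)
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    return z, fz, w
```

For example, on `X = np.linspace(-1, 1, 200)` with `F = np.exp(X)`, the loop terminates at a small degree well below `max_degree` for a relative tolerance of `1e-8`.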
The authors validate NL‑AAA on two families of test problems. In the first set, they approximate the absolute‑value function |x| and the rectified linear unit (ReLU) on the interval