Parameterized Uniform Complexity in Numerics: from Smooth to Analytic, from NP-hard to Polytime
The synthesis of classical Computational Complexity Theory with Recursive Analysis provides a quantitative foundation for reliable numerics. Here the operators of maximization, integration, and solving ordinary differential equations are known to map (even high-order differentiable) polynomial-time computable functions to instances that are 'hard' for the classical complexity classes NP, #P, and CH; but, restricted to analytic functions, they map polynomial-time computable ones to polynomial-time computable ones – non-uniformly! We investigate the uniform parameterized complexity of the above operators in the setting of Weihrauch's TTE and its second-order extension due to Kawamura & Cook (2010). That is, we explore which (both continuous and discrete, first- and second-order) information and parameters on a given f are sufficient to obtain similar data on Max(f) and ∫f, and within what running time, in terms of these parameters and the guaranteed output precision 2^(-n). It turns out that Gevrey's hierarchy of functions, climbing from analytic to smooth, corresponds to the computational complexity of maximization growing from polytime to NP-hard. Proof techniques involve mainly the Theory of (discrete) Computation, Hard Analysis, and Information-Based Complexity.
💡 Research Summary
The paper investigates the computational complexity of three fundamental operators on real‑valued functions—maximisation (Max), definite integration (∫), and the solution of ordinary differential equations (ODE)—within a uniform, parameterised framework. Classical results in recursive analysis tell us that, when the input function f is merely polynomial‑time computable and sufficiently smooth (e.g., C^k for large k), these operators can encode hard discrete problems: SAT reduces to Max, #SAT to integration, and counting‑hierarchy (CH) problems to ODE solving. Consequently, the operators are NP‑hard, #P‑hard, and CH‑hard in the worst case.
On the other hand, if f is analytic, non‑uniform constructions show that each operator can be computed in polynomial time. The novelty of this work lies in moving from such non‑uniform existence results to a fully uniform analysis based on Weihrauch’s Type‑2 Theory of Effectivity (TTE) and the second‑order extension introduced by Kawamura and Cook (2010). In this setting, an algorithm’s input consists not only of the desired output precision 2⁻ⁿ but also of explicit “information parameters” describing what discrete and continuous data about f are available (e.g., values at rational points, a finite set of Taylor coefficients, bounds on derivatives).
The authors introduce Gevrey classes G^α (α ≥ 1) as a quantitative bridge between analyticity (α = 1) and mere smoothness (α growing without bound). A function belongs to G^α if its k‑th derivative satisfies |f^{(k)}(x)| ≤ C·R⁻ᵏ·(k!)^α for all k, with constants C and R that are part of the input parameters. By varying α they obtain a hierarchy of function spaces whose computational properties change dramatically.
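The Gevrey estimate is easy to make concrete. The sketch below (illustrative only; `gevrey_bound` is a hypothetical helper, not from the paper) checks that sin lies in G^1, the analytic class, with constants C = R = 1, and that raising α only weakens the bound:

```python
import math

def gevrey_bound(k, alpha, C=1.0, R=1.0):
    """Right-hand side of the Gevrey estimate |f^(k)(x)| <= C * R**(-k) * (k!)**alpha."""
    return C * R ** (-k) * math.factorial(k) ** alpha

# Every derivative of sin is +/-sin or +/-cos, so max_x |sin^(k)(x)| = 1.
# Hence sin belongs to the Gevrey class G^1 (analytic) with constants C = R = 1.
for k in range(8):
    assert 1.0 <= gevrey_bound(k, alpha=1.0)                          # G^1 bound holds
    assert gevrey_bound(k, alpha=1.0) <= gevrey_bound(k, alpha=1.5)   # larger alpha is weaker
```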
The main technical contributions are:
- **Uniform Upper Bounds for Analytic Functions (α = 1).** Using the representation of analytic functions by rapidly converging power series, the authors design algorithms that, given C, R, and a precision n, compute Max(f), ∫f, and the solution of an ODE with right‑hand side f in time polynomial in n, log C, and log R. The algorithms rely on adaptive interval subdivision for maximisation, Gauss‑Legendre quadrature with error control for integration, and Picard iteration with certified error bounds for ODE solving.
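Adaptive interval subdivision for maximisation can be illustrated by a minimal branch‑and‑bound sketch (not the paper's algorithm). It assumes only a global derivative bound; for f ∈ G^1 with constants C and R, the Gevrey estimate at k = 1 supplies sup|f′| ≤ C/R:

```python
import heapq
import math

def certified_max(f, a, b, dfbound, eps):
    """
    Approximate max f on [a,b] to within eps, given dfbound >= sup |f'|.
    On any subinterval [l,r], f(x) <= f(mid) + dfbound*(r-l)/2, which gives a
    certified upper bound; intervals whose bound cannot beat the best value
    seen so far (plus eps) are pruned, the rest are bisected.
    """
    best = f((a + b) / 2)                          # best function value seen so far
    heap = [(-(best + dfbound * (b - a) / 2), a, b)]
    while heap:
        neg_ub, l, r = heapq.heappop(heap)
        if -neg_ub <= best + eps:                  # no remaining interval can do better
            return best
        for lo, hi in ((l, (l + r) / 2), ((l + r) / 2, r)):
            mid = (lo + hi) / 2
            val = f(mid)
            best = max(best, val)
            heapq.heappush(heap, (-(val + dfbound * (hi - lo) / 2), lo, hi))
    return best

# max of sin on [0, 3] is sin(pi/2) = 1, and |sin'| <= 1
assert abs(certified_max(math.sin, 0.0, 3.0, 1.0, 1e-6) - 1.0) < 1e-5
```

The pruning step is what keeps the analytic case cheap: with a small derivative bound the upper bounds tighten quickly, whereas the spike constructions below defeat exactly this mechanism.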
- **Hardness Escalation for Slightly Non‑Analytic Functions (α > 1).** For any fixed ε > 0, they construct families of functions in G^{1+ε} that embed Boolean formulas as narrow "spikes". The location of each spike encodes a variable assignment, while the spike height encodes clause satisfaction. By carefully controlling the Gevrey constants, the constructed functions remain within the prescribed class. This yields polynomial‑time many‑one reductions from SAT to Max, from #SAT to integration, and from counting‑hierarchy‑complete problems to ODE solving, establishing NP‑hardness, #P‑hardness, and CH‑hardness respectively.
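The spike idea can be shown in miniature. The toy below (illustrative only; it ignores the delicate control of Gevrey constants that the actual construction requires) places one smooth bump per assignment of a small CNF formula, with height equal to the number of satisfied clauses, so that Max recovers the MAX‑SAT value:

```python
import math

def bump(x):
    """C-infinity bump supported on (-1, 1): the classic exp(-1/(1-x^2)) profile."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def satisfied(clauses, bits):
    """Clauses satisfied by `bits` (literal i>0 means x_i, i<0 means not x_i)."""
    return sum(any((lit > 0) == bits[abs(lit) - 1] for lit in clause) for clause in clauses)

def spike_function(clauses, nvars):
    """f on [0,1] with one narrow spike per assignment; spike height = #satisfied
    clauses, so max f equals the MAX-SAT value times bump(0)."""
    width = 1.0 / (3 * 2 ** nvars)          # 3x narrower than spike spacing: no overlap
    def f(x):
        total = 0.0
        for a in range(2 ** nvars):
            bits = [(a >> i) & 1 == 1 for i in range(nvars)]
            center = (a + 0.5) / 2 ** nvars
            total += satisfied(clauses, bits) * bump((x - center) / width)
        return total
    return f

# (x1 or x2) and (not x1 or x2) is satisfiable, so some spike reaches height 2*bump(0)
clauses = [(1, 2), (-1, 2)]
f = spike_function(clauses, 2)
peak = max(f((a + 0.5) / 4) for a in range(4))
assert abs(peak - 2 * bump(0)) < 1e-12
```

An adaptive maximiser that only evaluates f must probe exponentially many candidate spike locations, which is the intuition behind the SAT reduction.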
- **Parameterised Information‑Based Complexity.** The paper analyses which pieces of information about f suffice to break the hardness barrier. It shows that providing a finite number of Taylor coefficients up to order K ≈ poly(n), together with a bound on the Gevrey constants, brings Max down to polynomial time even for α = 1+ε, whereas pointwise evaluations alone do not. This yields a precise trade‑off: the richer the supplied discrete data, the lower the required computational resources.
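How finitely many Taylor coefficients plus a constant bound suffice can be sketched as follows (a hypothetical helper under a simplifying assumption, not the paper's algorithm: the coefficients are assumed to obey a geometric bound |c_k| ≤ C·ρ^k with ρ < 1, which makes both the truncation tail and a derivative bound explicit):

```python
import math

def max_from_taylor(coeffs, C, rho, eps):
    """
    Approximate max of f(x) = sum_k c_k x^k on [-1,1] to within eps + tail,
    given the first coefficients c_0..c_K and the bound |c_k| <= C * rho**k.
    Returns (value, certified error radius).
    """
    K = len(coeffs) - 1
    tail = C * rho ** (K + 1) / (1 - rho)       # truncation error beyond c_K on [-1,1]
    L = C * rho / (1 - rho) ** 2                # sum_k k*C*rho**k bounds |f'| and |p'|
    h = eps / max(L, 1e-12)                     # grid spacing so that L*h/2 <= eps/2
    m = max(2, math.ceil(2 / h))
    def p(x):                                   # Horner evaluation of the truncated series
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc
    grid_max = max(p(-1 + 2 * i / m) for i in range(m + 1))
    return grid_max, tail + eps / 2

# f(x) = 1/(2-x) has Taylor coefficients c_k = 2^-(k+1), so C = 0.5, rho = 0.5;
# its maximum on [-1,1] is f(1) = 1.
coeffs = [0.5 ** (k + 1) for k in range(31)]
val, err = max_from_taylor(coeffs, C=0.5, rho=0.5, eps=1e-3)
assert abs(val - 1.0) <= err
```

Note the contrast with the evaluation‑only setting: here the grid size is polynomial in 1/eps and the supplied constants, with no adaptive search for hidden spikes.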
- **Algorithmic Framework in the Weihrauch/Kawamura‑Cook Model.** All algorithms are expressed as Weihrauch reductions, making the dependence on the representation of f explicit. The authors prove that the maximisation operator restricted to G^α is Weihrauch‑equivalent to the closed choice operator on the reals when α = 1, but becomes Weihrauch‑hard for α > 1, mirroring the classical complexity jump.
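The Picard iteration mentioned among the contributions has a particularly clean form for analytic right‑hand sides, since it can operate directly on Taylor coefficients. A toy instance (not the paper's certified algorithm) for y′ = y, y(0) = 1, where each iteration integrates the polynomial exactly and the iterates converge to the Taylor series of exp:

```python
from math import e, factorial

def picard(n_iters):
    """Picard iteration for y' = y, y(0) = 1, carried out exactly on polynomial
    coefficients: y_{k+1}(t) = 1 + integral_0^t y_k(s) ds."""
    coeffs = [1.0]                                                # y_0(t) = 1
    for _ in range(n_iters):
        coeffs = [1.0] + [c / (i + 1) for i, c in enumerate(coeffs)]  # 1 + antiderivative
    return coeffs

# After n iterations the coefficients form the degree-n Taylor partial sum of exp.
cs = picard(12)
assert all(abs(c - 1 / factorial(i)) < 1e-15 for i, c in enumerate(cs))
assert abs(sum(cs) - e) < 1e-8                                    # y(1) approximates e
```

The certified version would additionally track an explicit remainder bound per iteration, derived from the Gevrey constants of the right‑hand side.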
Beyond the theoretical results, the authors discuss practical implications for numerical software. They argue that a pre‑processing step that estimates or enforces analyticity (e.g., by fitting a rational or exponential model) can dramatically improve the guaranteed runtime of optimisation, integration, and ODE solvers. Conversely, when only smoothness guarantees are available, one should expect worst‑case exponential behaviour unless additional discrete information (such as derivative bounds or spectral coefficients) is supplied.
The paper concludes with several avenues for future work: extending the analysis to multivariate functions, exploring other smoothness hierarchies (e.g., ultradifferentiable or quasianalytic classes), and implementing experimental benchmarks that compare the theoretical parameter bounds with observed performance in high‑precision libraries such as Arb or MPFR.
In summary, the work establishes a clear correspondence between the Gevrey smoothness index of an input function and the uniform computational complexity of fundamental numerical operators, thereby unifying classical complexity theory, recursive analysis, and information‑based complexity within a rigorous, second‑order computability framework.