Calculating principal eigen-functions of non-negative integral kernels: particle approximations and applications
Often in applications such as rare-event estimation or optimal control, one must calculate the principal eigen-function and eigen-value of a non-negative integral kernel. Except in the finite-dimensional case, neither quantity can usually be computed exactly. In this paper, we develop numerical approximations for these quantities. We show how a generic interacting particle algorithm can be used to deliver numerical approximations of the eigen-quantities and the associated so-called “twisted” Markov kernel, and how these approximations are relevant to the aforementioned applications. In addition, we study a collection of random integral operators underlying the algorithm, address some of their mean and path-wise properties, and obtain $L_{r}$ error estimates. Finally, numerical examples are provided in the context of importance sampling for computing tail probabilities of Markov chains and computing value functions for a class of stochastic optimal control problems.
💡 Research Summary
The paper addresses the problem of computing the principal eigen‑function φ and eigen‑value λ of a non‑negative integral kernel Q, a task that is analytically tractable only in finite‑dimensional settings. In many applications—most notably rare‑event probability estimation and stochastic optimal control—one needs accurate approximations of these spectral quantities but cannot obtain them in closed form. The authors propose a generic interacting particle algorithm that simultaneously approximates φ, λ, and the associated “twisted” Markov kernel Q^φ, which is defined by Q^φ(x,dy)=Q(x,dy)φ(y)/(λφ(x)).
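That Q^φ is indeed a Markov kernel follows in one line from the eigen-relation Qφ = λφ:

```latex
\int Q^{\varphi}(x,\mathrm{d}y)
  = \frac{1}{\lambda\,\varphi(x)} \int Q(x,\mathrm{d}y)\,\varphi(y)
  = \frac{(Q\varphi)(x)}{\lambda\,\varphi(x)}
  = 1 .
```

So twisting by the principal eigen-function converts the non-negative kernel Q into a bona fide transition kernel, which is what makes it usable for simulation.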
The algorithm proceeds as follows. A population of N particles is initialized from a prescribed distribution μ₀. At each iteration k the particles are propagated using the original kernel Q, producing candidate locations Y_{k+1}^{(i)}. A weight function G_k(x)=Qφ(x)/φ(x) is evaluated at each candidate; these weights are normalized and used to resample the particle set, thereby biasing the empirical distribution toward regions where φ is large. The normalizing constants generated during resampling provide an on‑the‑fly estimate of λ, while the resampled particles constitute a Monte‑Carlo representation of the twisted kernel Q^φ.
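The weight–resample–propagate loop above can be sketched on a toy two-state example. This is a minimal finite-state analogue, not the paper's exact algorithm: the kernel Q, its decomposition Q(x,y) = G(x)·M(x,y) into a potential and a Markov kernel, and all numerical values are illustrative assumptions. The running product of average weights estimates λⁿ, so its n-th root approximates the principal eigen-value, which in this finite case can be checked against the Perron root computed by linear algebra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-negative kernel on a two-point state space (hypothetical numbers).
Q = np.array([[0.6, 0.3],
              [0.1, 0.5]])

# Decompose Q(x, y) = G(x) * M(x, y): a potential G and a Markov kernel M.
G = Q.sum(axis=1)          # G(x) = sum_y Q(x, y)
M = Q / G[:, None]         # row-stochastic propagation kernel

N, n_steps = 2000, 100
x = rng.integers(0, 2, size=N)       # particle states, initialised uniformly
log_lambda = 0.0
for _ in range(n_steps):
    w = G[x]                         # weight particles by the potential
    log_lambda += np.log(w.mean())   # accumulate the per-step eigenvalue estimate
    x = x[rng.choice(N, size=N, p=w / w.sum())]   # multinomial resampling
    x = (rng.random(N) > M[x, 0]).astype(int)     # propagate with M

lam_est = np.exp(log_lambda / n_steps)
lam_true = np.max(np.linalg.eigvals(Q).real)     # Perron eigenvalue of Q
print(lam_est, lam_true)
```

The product of the per-step average weights is the standard Sequential Monte Carlo estimate of the unnormalized mass μ₀Qⁿ(1) ≈ C·λⁿ, so the per-step geometric mean converges to λ as n grows.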
The authors develop two complementary theoretical analyses. First, a mean‑field (propagation‑of‑chaos) result shows that, as N→∞, the empirical measure converges in probability to the deterministic flow obtained by repeatedly applying Q to μ₀ and renormalizing to a probability measure. Second, a pathwise L_r error bound is derived for any r≥1, establishing that the deviation between the particle approximation and the true quantities decays at the standard Monte‑Carlo rate O(N^{-1/2}). The proof relies on the conditional independence of the particles given the past, moment bounds on G_k, and martingale concentration inequalities. Importantly, the error bound holds uniformly over a finite time horizon, which is essential for practical implementations.
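Schematically, bounds of this kind take the following generic form (the symbols here are illustrative rather than the paper's exact notation: η_k^N is the N-particle empirical measure, η_k its limit, f a bounded test function, and c_r(k) a constant that can be taken uniform over a finite horizon):

```latex
\mathbb{E}\!\left[\big|\eta_k^{N}(f) - \eta_k(f)\big|^{r}\right]^{1/r}
  \;\le\; \frac{c_r(k)\,\|f\|_{\infty}}{\sqrt{N}},
  \qquad r \ge 1 .
```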
Two concrete applications illustrate the methodology. In the rare‑event setting, the goal is to estimate a tail probability P_μ{X_T∈A} for a Markov chain with transition kernel Q. By employing the twisted kernel Q^φ, the algorithm performs importance sampling that concentrates trajectories in the rare set A, dramatically reducing variance compared with naïve Monte‑Carlo or static importance sampling schemes. Numerical experiments on a high‑dimensional random walk confirm the theoretical variance reduction.
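The variance-reduction mechanism can be illustrated on a small killed Markov chain. This is a hedged toy sketch, not the paper's experiment: the sub-stochastic kernel Q and all numbers are invented, and the eigen-pair is computed exactly by linear algebra rather than by the particle method. The key identity is that under the twisted chain, the path importance weight telescopes to λ^T·φ(x₀)/φ(X_T), so the estimator depends only on the terminal state and has small relative variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sub-stochastic kernel Q on {0, 1, 2}: a Markov chain killed at each step
# with positive probability (rows sum to less than one). Hypothetical numbers.
Q = np.array([[0.50, 0.25, 0.05],
              [0.10, 0.60, 0.10],
              [0.05, 0.10, 0.70]])

# Principal eigen-pair (Perron root and positive eigen-function) of Q.
vals, vecs = np.linalg.eig(Q)
i = np.argmax(vals.real)
lam = vals.real[i]
phi = np.abs(vecs[:, i].real)                    # positive right eigenvector

# Twisted kernel Q_phi(x, y) = Q(x, y) phi(y) / (lam phi(x)): a Markov kernel.
Q_phi = Q * phi[None, :] / (lam * phi[:, None])

T, x0, n = 25, 0, 20000
p_true = np.linalg.matrix_power(Q, T)[x0].sum()  # small survival probability

# Importance sampling under Q_phi: the product of likelihood ratios along a
# path telescopes to lam^T * phi(x0) / phi(X_T), so only X_T matters.
cdf = np.cumsum(Q_phi, axis=1)
x = np.full(n, x0)
for _ in range(T):
    x = (rng.random(n)[:, None] > cdf[x]).sum(axis=1)   # inverse-CDF step
p_is = (lam**T * phi[x0] / phi[x]).mean()
print(p_true, p_is)
```

Because the weight varies only through φ(X_T), which is bounded above and below, the relative error of `p_is` is tiny even though the event itself is small; a naive Monte Carlo estimate of the same probability would need far more samples for comparable accuracy.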
In the stochastic optimal control context, the value function V solves a Hamilton–Jacobi–Bellman equation. By exponentiating (or otherwise transforming) V into h(x)=exp(−θV(x)), one obtains a positive eigen‑function of a suitably defined kernel. The particle algorithm then yields an approximation of h and the associated eigen‑value, which can be back‑transformed to recover an approximation of V and the optimal feedback policy. The authors test this approach on linear‑quadratic and nonlinear control problems, showing that the particle‑based value estimates converge to the true values as N increases, and that the derived policies achieve near‑optimal performance.
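Schematically, the transformation works as follows in the linearly-solvable/risk-sensitive setting (the symbols c, P, and θ below are illustrative assumptions, not the paper's exact notation): with stage cost c and uncontrolled transition kernel P, the kernel Q(x,dy) = e^{−θc(x)} P(x,dy) admits h(x) = exp(−θV(x)) as a positive eigen-function, and V is recovered from h up to an additive constant:

```latex
\int e^{-\theta c(x)}\,P(x,\mathrm{d}y)\,h(y) \;=\; \lambda\,h(x),
\qquad
V(x) \;=\; -\tfrac{1}{\theta}\,\log h(x) + \text{const} .
```

Taking logarithms of the eigen-equation recovers an additive Bellman-type recursion for V, which is why the particle approximation of (h, λ) translates directly into an approximation of the value function.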
The experimental section also investigates the impact of different resampling schemes (multinomial, systematic, stratified) and resampling frequencies on stability and bias. Results indicate that systematic resampling with moderate frequency offers a good trade‑off between variance reduction and computational overhead.
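The difference between multinomial and systematic resampling is easy to demonstrate in isolation. The sketch below (a generic illustration, not the paper's implementation) compares the variance of the offspring counts produced by the two schemes for a skewed weight vector: multinomial resampling draws ancestors i.i.d., while systematic resampling uses a single uniform offset and a regular grid through the weight CDF, so each offspring count deviates from its expectation N·wᵢ by less than one.

```python
import numpy as np

rng = np.random.default_rng(2)

def multinomial_resample(w, rng):
    """Draw N ancestor indices i.i.d. from the normalised weights."""
    N = len(w)
    return rng.choice(N, size=N, p=w / w.sum())

def systematic_resample(w, rng):
    """One uniform offset plus a regular grid of N points through the CDF."""
    N = len(w)
    cdf = np.cumsum(w / w.sum())
    cdf[-1] = 1.0                      # guard against floating-point round-off
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(cdf, positions)

# Compare offspring-count variance for a skewed weight vector.
w = rng.random(100) ** 3
count = lambda idx: np.bincount(idx, minlength=len(w))
reps = 500
c_mult = np.array([count(multinomial_resample(w, rng)) for _ in range(reps)])
c_syst = np.array([count(systematic_resample(w, rng)) for _ in range(reps)])
var_mult = c_mult.var(axis=0).mean()   # per-particle variance, averaged
var_syst = c_syst.var(axis=0).mean()
print(var_mult, var_syst)              # systematic is markedly smaller
```

Stratified resampling (one uniform per grid cell instead of a shared offset) sits between the two; all three schemes keep the resampled measure unbiased, so the choice trades resampling noise against implementation cost, consistent with the trade-off reported in the experiments.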
Overall, the paper makes several notable contributions. It establishes a unified particle‑filter framework for approximating principal eigen‑functions of non‑negative integral operators, links the eigen‑value estimation to the normalizing constants of the particle system, and provides rigorous L_r error guarantees that match classical Sequential Monte Carlo theory. By demonstrating the approach on both rare‑event simulation and stochastic control, the authors show that the method is broadly applicable and often outperforms existing techniques. The work therefore opens a practical pathway for tackling spectral problems in infinite‑dimensional settings where analytical solutions are unavailable.