Lévy process simulation by stochastic step functions
We study a Monte Carlo algorithm for simulating probability distributions based on stochastic step functions, and compare it to the traditional Metropolis/Hastings method. Unlike the latter, the step function algorithm can produce an uncorrelated Markov chain. We apply this method to the simulation of Lévy processes, for which simulation of uncorrelated jumps is essential. We perform numerical tests consisting of simulation from probability distributions, as well as simulation of Lévy process paths. The Lévy processes include a jump-diffusion with a Gaussian Lévy measure, as well as jump-diffusion approximations of the infinite-activity NIG and CGMY processes. To increase the efficiency of the step function method, and to decrease correlations in the Metropolis/Hastings method, we introduce adaptive hybrid algorithms which employ uncorrelated draws from an adaptive discrete distribution defined on a space of subdivisions of the Lévy measure space. The nonzero correlations in Metropolis/Hastings simulations result in heavy tails for the Lévy process distribution at any fixed time; this problem is eliminated in the step function approach. For each of the Gaussian, NIG and CGMY processes, we compare the distribution at t=1 with exact results and note the superiority of the step function approach.
💡 Research Summary
This paper introduces a novel Monte‑Carlo technique, the Stochastic Step Function (SF) algorithm, for sampling probability distributions that arise as Lévy measures in jump‑diffusion models. The authors contrast SF with the traditional Metropolis‑Hastings (MH) algorithm, emphasizing that MH inevitably produces correlated samples because of its accept‑reject mechanism. Two sources of correlation are identified: rejection correlation (repeated values when proposals are rejected) and transition correlation (small moves when the proposal distribution is tightly localized). Even when an independent MH scheme is used, a poor proposal leads to high rejection rates and thus strong correlation, which is especially problematic for Lévy processes where many jumps must be generated along a single path.
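The rejection-correlation effect described above is easy to reproduce with a minimal independence Metropolis‑Hastings sampler. The following Python sketch uses an illustrative Gaussian target and a deliberately poor uniform proposal (neither is from the paper): every rejected proposal repeats the current value, and the fraction of repeated values measures the rejection correlation.

```python
import numpy as np

def independence_mh(target_pdf, proposal_sample, proposal_pdf, n, x0, seed=None):
    """Independence MH: proposals ignore the current state.

    Rejections repeat the current value in the chain, which is the
    'rejection correlation' discussed in the text.
    """
    rng = np.random.default_rng(seed)
    chain = np.empty(n)
    x = x0
    for i in range(n):
        y = proposal_sample(rng)
        # Acceptance ratio for an independence proposal q:
        # alpha = min(1, pi(y) q(x) / (pi(x) q(y)))
        alpha = (target_pdf(y) * proposal_pdf(x)) / (target_pdf(x) * proposal_pdf(y))
        if rng.random() < alpha:
            x = y          # accept: move to the proposed state
        chain[i] = x       # on rejection, the old value is repeated
    return chain

# Target: unnormalized standard normal; poor uniform proposal on [-5, 5]
target = lambda x: np.exp(-0.5 * x * x)
chain = independence_mh(target,
                        lambda rng: rng.uniform(-5.0, 5.0),
                        lambda x: 0.1,
                        n=20000, x0=0.0, seed=42)
repeats = np.mean(chain[1:] == chain[:-1])  # fraction of rejected steps
```

With this mismatched proposal a large fraction of steps are rejections, so long runs of identical values appear in the chain, exactly the behavior that is problematic when many independent jumps must be generated along a Lévy path.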
The SF method constructs a piecewise‑constant càdlàg process Xₜ by assigning a “resting time” τᵢ = ν(sᵢ) to each state sᵢ drawn from a Markov chain with transition density ρ that is independent of the current position (e.g., a uniform distribution over the whole state space). Sampling Xₜ on a uniform time grid of spacing Δt yields values distributed exactly according to the normalized Lévy density ν₁ = ν/λ, where λ is the total mass of ν, without any accept‑reject step, thereby eliminating rejection correlation. If ρ is independent of the current position, transition correlation also disappears. The only practical limitation is that Δt must exceed sup ν to avoid repeated values; otherwise the algorithm becomes inefficient. The authors mitigate this by randomizing τᵢ with an exponential distribution of mean ν(sᵢ) or by adaptively subdividing the state space.
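The core mechanism can be sketched as follows. This Python sketch assumes a one-dimensional state space [a, b], a uniform independent ρ, an illustrative Gaussian-shaped ν, and the basic deterministic resting time τᵢ = ν(sᵢ) (not the exponentially randomized variant); since time spent at a state s is proportional to ν(s), values read off a coarse enough grid are distributed as ν/λ.

```python
import numpy as np

def sf_sample(nu, a, b, dt, n, seed=None):
    """Stochastic step-function sampler (sketch).

    States s_i are drawn i.i.d. uniform on [a, b] and each rests for
    tau_i = nu(s_i).  Reading the step process off a uniform time grid
    of spacing dt yields samples distributed as nu / lambda, where
    lambda is the total mass of nu.  dt should exceed sup(nu) so that
    no state is read twice.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(n)
    t_next_jump = 0.0       # time at which the current state expires
    t_grid = 0.0            # next grid point to read off
    s = rng.uniform(a, b)
    for k in range(n):
        # advance the step process until its rest interval covers t_grid
        while t_next_jump <= t_grid:
            s = rng.uniform(a, b)
            t_next_jump += nu(s)
        samples[k] = s
        t_grid += dt
    return samples

# Illustrative target proportional to nu on [-3, 3]; sup(nu) = 1 < dt
nu = lambda s: np.exp(-0.5 * s * s)
draws = sf_sample(nu, -3.0, 3.0, dt=1.5, n=20000, seed=1)
```

Note there is no accept‑reject step anywhere: every grid point produces a fresh value, and with dt larger than the maximal resting time, consecutive grid points always land in distinct rest intervals.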
To improve efficiency for both MH and SF, the authors propose an adaptive discretization of the real line. The domain Ω is partitioned into a finite collection of intervals {Uᵢ}. An auxiliary discrete measure \tilde{ν} approximates the true interval masses ν(Uᵢ). During simulation, \tilde{ν} is updated using the observed samples, akin to stratified sampling or the VEGAS variance‑reduction scheme. In the Adaptive Independent MH (AIMH) version, \tilde{ν} supplies an independent proposal distribution, which quickly converges toward the target, reducing rejection rates. In the Adaptive SF (ASF) version, \tilde{ν} guides the selection of intervals, after which the exact ν(s) determines the resting time, preserving the correlation‑free property while keeping computational cost low.
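A minimal version of the adaptive discretization can be sketched in Python as follows. The bin count and the running-average update rule are illustrative choices, not the paper's exact scheme: the domain is partitioned into intervals, \tilde{ν} starts uniform, intervals are drawn proportionally to their current \tilde{ν} weight, and each observed exact value ν(x) refines the weight of its interval.

```python
import numpy as np

class AdaptiveMeasure:
    """Piecewise-constant approximation tilde_nu of a density nu on [a, b].

    Sketch of the adaptive-discretization idea: sample an interval by
    weight, sample a point uniformly inside it, then update the
    interval's weight with the exact density value observed there.
    """
    def __init__(self, nu, a, b, n_bins=32):
        self.nu = nu
        self.edges = np.linspace(a, b, n_bins + 1)
        self.weights = np.ones(n_bins)    # start from a uniform tilde_nu
        self.counts = np.zeros(n_bins)

    def sample(self, rng):
        p = self.weights / self.weights.sum()
        i = rng.choice(len(p), p=p)
        x = rng.uniform(self.edges[i], self.edges[i + 1])
        # refine tilde_nu with the exact value nu(x) (running mean per bin)
        self.counts[i] += 1
        self.weights[i] += (self.nu(x) - self.weights[i]) / self.counts[i]
        # return the point and the current proposal density at x,
        # as needed for an AIMH acceptance ratio
        return x, p[i] / (self.edges[i + 1] - self.edges[i])

rng = np.random.default_rng(0)
am = AdaptiveMeasure(lambda s: np.exp(-0.5 * s * s), -3.0, 3.0)
pts = [am.sample(rng)[0] for _ in range(5000)]
```

In the AIMH role the returned density feeds the acceptance ratio of an independence proposal; in the ASF role the interval weights steer which subdivision is sampled, while the exact ν(s) still sets the resting time, so no correlation is reintroduced.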
The authors evaluate the methods on three Lévy models:
- Gaussian Lévy measure (finite activity). At t = 1 the exact distribution is normal. SF reproduces the density with negligible bias; MH shows noticeable distortion in the tails due to correlation, and its empirical distribution deviates from the analytical one.
- Normal Inverse Gaussian (NIG) process, approximated by a jump‑diffusion in which small jumps are replaced by a Brownian component. The exact marginal at t = 1 has heavy tails with a known asymptotic exponent. SF matches the tail exponent precisely, while MH over‑estimates the tail probability because correlated jumps cluster.
- CGMY process (another infinite‑activity model). Similar results: SF yields accurate marginal densities and tail behavior; MH produces a heavier tail and larger mean‑square error.
Across all experiments, the adaptive hybrid schemes dramatically improve performance. Starting from a uniform \tilde{ν}, after a few thousand samples the AIMH acceptance rate rises from ~0.5 to >0.8, and the KL divergence between the proposal and target drops sharply. The ASF algorithm reduces the required Δt by a factor of two without re‑introducing correlation, leading to a 30 % speed‑up compared with the non‑adaptive SF.
The paper concludes that the SF framework provides a principled way to generate uncorrelated Lévy jumps, a critical requirement for accurate pathwise simulation of jump‑diffusion processes. By coupling SF with an adaptive discretization of the Lévy measure, the method attains both high statistical efficiency and low computational overhead, outperforming traditional MH in both finite‑ and infinite‑activity settings. The authors suggest extensions to multidimensional Lévy processes, time‑varying Lévy measures, and GPU‑accelerated implementations for large‑scale Monte‑Carlo applications such as option pricing and risk management.