Deep Learning and Elicitability for McKean-Vlasov FBSDEs With Common Noise
We present a novel numerical method for solving McKean-Vlasov forward-backward stochastic differential equations (MV-FBSDEs) with common noise, combining Picard iterations, elicitability, and deep learning. The key innovation is the use of elicitability to derive a path-wise loss function, enabling efficient training of neural networks to approximate both the backward process and the conditional expectations arising from common noise, without requiring computationally expensive nested Monte Carlo simulations. The mean-field interaction term is parameterized by a recurrent neural network trained to minimize an elicitable score, while the backward process is approximated by a feedforward network representing the decoupling field. We validate the algorithm on a systemic-risk inter-bank borrowing and lending model for which analytical solutions exist, demonstrating accurate recovery of the true solution. We further extend the model to quantile-mediated interactions, showcasing the flexibility of the elicitability framework beyond conditional means or moments. Finally, we apply the method to a non-stationary Aiyagari–Bewley–Huggett economic growth model with endogenous interest rates, illustrating its applicability to complex mean-field games without closed-form solutions.
💡 Research Summary
The paper introduces a novel numerical algorithm for solving McKean‑Vlasov forward‑backward stochastic differential equations (MV‑FBSDEs) that include a common noise component. Traditional approaches to such problems face two major difficulties: (i) the forward and backward components are tightly coupled, and (ii) the mean‑field interaction depends on the conditional law of the forward state given the common noise, which normally requires costly nested Monte Carlo simulations.
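For concreteness, the generic form of such a system (in our own notation, which may differ from the paper's) reads

$$
\begin{aligned}
dX_t &= b\big(t, X_t, Y_t, \mu_t\big)\,dt + \sigma\big(t, X_t, Y_t, \mu_t\big)\,dW_t + \sigma^0\big(t, X_t, Y_t, \mu_t\big)\,dW^0_t,\\
dY_t &= -f\big(t, X_t, Y_t, Z_t, \mu_t\big)\,dt + Z_t\,dW_t + Z^0_t\,dW^0_t, \qquad Y_T = g\big(X_T, \mu_T\big),
\end{aligned}
$$

where $W$ is the idiosyncratic noise, $W^0$ the common noise, and $\mu_t = \mathcal{L}(X_t \mid \mathcal{F}^{W^0}_t)$ is the conditional law of the forward state given the common noise. It is the dependence of the coefficients on $\mu_t$ (or on statistics of it) that would naively require an inner simulation for every outer path.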
The authors overcome these challenges by combining three ingredients: (1) Picard iterations for the forward SDE, (2) the concept of elicitability (score‑based characterization of statistical functionals), and (3) deep neural networks for function approximation. The key insight is that many conditional statistics—conditional expectations, quantiles, or even jointly elicitable risk measures such as VaR/ES—can be expressed as minimizers of a strictly consistent scoring function. By embedding this variational formulation into a path‑wise loss, the algorithm can learn the required conditional statistics directly, without any nested simulation.
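As a minimal illustration of this idea (a sketch of ours, not the paper's code: the PyTorch setup, the toy linear dependence of the state on the common noise, and the choice of the median are all assumptions), a conditional quantile can be learned by minimizing its strictly consistent score over simulated paths, with one sample per path and no inner simulation loop:

```python
import torch

# Toy setup: X depends on common noise W0 plus idiosyncratic noise.
# Goal: learn q(w0) ~ alpha-quantile of X given W0 = w0 by minimizing the
# pinball (quantile) score -- the strictly consistent score that makes
# quantiles elicitable. No nested Monte Carlo: one sample of (W0, X) per path.
torch.manual_seed(0)
alpha = 0.5                                    # target quantile level (median)
n_paths = 4096
w0 = torch.randn(n_paths, 1)                   # common-noise sample per path
x = 2.0 * w0 + 0.5 * torch.randn(n_paths, 1)   # forward state (toy model)

# A small network standing in for the conditional-quantile map w0 -> q(w0).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def pinball(pred, target, a):
    """Strictly consistent scoring function for the a-quantile."""
    diff = target - pred
    return torch.mean(torch.maximum(a * diff, (a - 1.0) * diff))

for step in range(500):
    opt.zero_grad()
    loss = pinball(net(w0), x, alpha)          # path-wise elicitable score
    loss.backward()
    opt.step()

# The minimizer approximates the true conditional median 2 * w0.
print(net(torch.tensor([[1.0]])).item())       # ~2.0 expected
```

Swapping the pinball loss for the squared loss recovers conditional expectations, and jointly elicitable pairs such as (VaR, ES) admit analogous joint scores.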
The algorithm proceeds as follows. The time interval $[0, T]$ is discretized, and the solution is built up through Picard iterations: at each iteration the forward SDE is simulated on the grid with the current estimates held fixed, the mean-field interaction term is updated by a recurrent network trained to minimize the elicitable score along the simulated paths, and the backward process is updated through a feedforward network representing the decoupling field.
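The outer structure of such a scheme can be sketched as follows. This is our own minimal illustration, not the paper's implementation: the toy mean-reverting drift, the squared-loss score (which elicits the conditional mean), and the network sizes are all assumptions, and the decoupling-field update for the backward process, which follows the same pattern, is omitted for brevity.

```python
import torch

# Schematic Picard loop (a sketch, not the paper's code): simulate the forward
# SDE with the mean-field term frozen, then refit that term by minimizing a
# path-wise elicitable score. The squared loss below elicits the conditional
# mean E[X_t | W^0-path]; a recurrent net consumes the common-noise path.
torch.manual_seed(0)
T, n_steps, n_paths, n_picard = 1.0, 20, 512, 5
dt = T / n_steps

gru = torch.nn.GRU(input_size=1, hidden_size=16, batch_first=True)
head = torch.nn.Linear(16, 1)        # (W^0 increments so far) -> mean-field term
opt = torch.optim.Adam([*gru.parameters(), *head.parameters()], lr=1e-2)

for k in range(n_picard):
    dW0 = torch.randn(n_paths, n_steps, 1) * dt ** 0.5   # common noise
    dW = torch.randn(n_paths, n_steps, 1) * dt ** 0.5    # idiosyncratic noise
    with torch.no_grad():                                # freeze nets to simulate
        m, _ = gru(dW0)
        m = head(m)                                      # current mean-field term
        x = torch.zeros(n_paths, n_steps + 1, 1)
        for i in range(n_steps):                         # Euler step, toy drift
            x[:, i + 1] = x[:, i] + (m[:, i] - x[:, i]) * dt + dW[:, i] + dW0[:, i]

    for step in range(200):                              # elicitable-score fit
        opt.zero_grad()
        h, _ = gru(dW0)
        loss = torch.mean((head(h) - x[:, 1:]) ** 2)     # squared loss -> cond. mean
        loss.backward()
        opt.step()
    print(f"Picard iter {k}: score {loss.item():.4f}")
```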