Parameter and hidden-state inference in mean-field models from partial observations of finite-size neural networks

Notice: This research summary and analysis were generated automatically by AI. For full accuracy, please refer to the [Original Paper Viewer] below or to the original arXiv source.

We study large but finite neural networks that, in the thermodynamic limit, admit an exact low-dimensional mean-field description. We assume that the governing mean-field equations describing macroscopic quantities such as the mean firing rate or mean membrane potential are known, while their parameters are not. Moreover, only a single scalar macroscopic observable from the finite network is assumed to be measurable. Using time-series data of this observable, we infer the unknown parameters of the mean-field equations and reconstruct the dynamics of unobserved (hidden) macroscopic variables. Parameter estimation is carried out using the differential evolution algorithm. To remove the dependence of the loss function on the unknown initial conditions of the hidden variables, we synchronize the mean-field model with the finite network throughout the optimization process. We demonstrate the methodology on two networks of quadratic integrate-and-fire neurons: one exhibiting periodic collective oscillations and another displaying chaotic collective dynamics. In both cases, the parameters are recovered with relative errors below $1\%$ for network sizes exceeding 1000 neurons.


💡 Research Summary

This paper addresses the inverse problem of estimating the parameters of an exact low‑dimensional mean‑field (MF) description of a large but finite neural network when only a single scalar macroscopic observable is available. The authors assume that the MF equations governing macroscopic quantities (e.g., mean firing rate or mean membrane potential) are known analytically, but the parameters entering these equations are unknown. Moreover, only one component of the macroscopic state vector—denoted X_out(t)—can be measured from the finite‑size network; the remaining components are hidden.
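For concreteness, the exact low-dimensional mean-field description of a network of quadratic integrate-and-fire neurons with Lorentzian-distributed excitabilities is typically the Montbrió–Pazó–Roxin system; it is shown here (in units with the membrane time constant set to one) as an illustrative assumption, since the paper's precise equations and parameter set may differ:

```latex
\begin{aligned}
\dot r &= \frac{\Delta}{\pi} + 2 r v,\\
\dot v &= v^2 + \bar\eta + J r - \pi^2 r^2,
\end{aligned}
```

Here $r$ is the mean firing rate, $v$ the mean membrane potential, and $(\Delta, \bar\eta, J)$ are the parameters to be inferred from a single measured observable, e.g. $X_{\text{out}}(t) = r(t)$.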

The central difficulty lies in the loss function L(P, X₀) = ½M⁻¹ ∑ₘ [X_out(tₘ) − x_out(tₘ; P, X₀)]², which measures the misfit between the measured observable X_out(tₘ) and the model prediction x_out(tₘ; P, X₀) at the M sampling times: it depends not only on the parameter vector P but also on the unknown initial conditions X₀ of the hidden macroscopic variables. The authors remove this dependence by synchronizing the mean-field model with the finite network throughout the optimization process, so that the hidden variables are driven onto the correct trajectory regardless of X₀.
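A minimal sketch of such a fitting loop, not the authors' implementation: it assumes the standard Montbrió–Pazó–Roxin mean-field equations, forward-Euler integration, noiseless synthetic data, and, as a simplification, known initial conditions (the paper instead synchronizes the model with the network precisely because the hidden initial conditions are unknown). Parameters are recovered with SciPy's `differential_evolution`, the same algorithm family the paper uses; all numerical values below are illustrative choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Assumed mean-field equations (Montbrió–Pazó–Roxin form, tau = 1):
#   dr/dt = Delta/pi + 2*r*v
#   dv/dt = v**2 + eta + J*r - (pi*r)**2
def simulate(eta, J, delta=1.0, r0=0.1, v0=-2.0, dt=0.01, steps=2000):
    """Forward-Euler trajectory of the observable r(t)."""
    r, v = r0, v0
    rs = np.empty(steps)
    for i in range(steps):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta + J * r - (np.pi * r) ** 2
        r += dt * dr
        v += dt * dv
        if not (np.isfinite(r) and np.isfinite(v)) or abs(v) > 1e6:
            rs[i:] = 1e6  # trajectory diverged: flood the rest as a penalty
            break
        rs[i] = r
    return rs

# Synthetic "observed" firing-rate trace from hypothetical ground-truth parameters.
TRUE_ETA, TRUE_J = -5.0, 15.0
observed = simulate(TRUE_ETA, TRUE_J)

def loss(p):
    # Least-squares misfit of the single observable r(t); in the paper the
    # hidden initial conditions would also enter here, handled by synchronization.
    return np.mean((simulate(p[0], p[1]) - observed) ** 2)

# Global search for (eta, J) within box bounds.
result = differential_evolution(loss, bounds=[(-10.0, 0.0), (5.0, 25.0)],
                                seed=0, maxiter=60, tol=1e-10)
eta_hat, J_hat = result.x
```

Because the data here are noiseless and generated by the same model, the global minimum of the loss is zero and the true parameters are recovered essentially exactly; with data from a finite network, the residual misfit instead reflects finite-size fluctuations around the mean-field limit.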

