Generative Modeling of Neural Dynamics via Latent Stochastic Differential Equations

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv source.

We propose a probabilistic framework for developing computational models of biological neural systems. In this framework, physiological recordings are viewed as discrete-time partial observations of an underlying continuous-time stochastic dynamical system which implements computations through its state evolution. To model this dynamical system, we employ a system of coupled stochastic differential equations with differentiable drift and diffusion functions and use variational inference to infer its states and parameters. This formulation enables seamless integration of existing mathematical models in the literature, neural networks, or a hybrid of both to learn and compare different models. We demonstrate this in our framework by developing a generative model that combines coupled oscillators with neural networks to capture latent population dynamics from single-cell recordings. Evaluation across three neuroscience datasets spanning different species, brain regions, and behavioral tasks shows that these hybrid models achieve competitive performance in predicting stimulus-evoked neural and behavioral responses compared to sophisticated black-box approaches while requiring an order of magnitude fewer parameters, providing uncertainty estimates, and offering a natural language for interpretation.


💡 Research Summary

The paper introduces a probabilistic framework for modeling neural dynamics using latent stochastic differential equations (SDEs). Physiological recordings (neural activity Y and behavior B) are treated as discrete-time partial observations of an underlying continuous‑time stochastic process x(t). The latent dynamics are defined by an Itô SDE:
 dx(t) = µθ(x(t), u(t)) dt + σθ(x(t), u(t)) dW(t),
where u(t) is a continuous control signal derived from external stimuli V via an input encoder ηθ, and µθ and σθ are differentiable drift and diffusion functions that can be instantiated as classical mechanistic models (e.g., coupled oscillators), as neural networks, or as hybrids of both. Observation models λθ and ρθ map the latent state to neural spike counts (often Poisson) and behavioral measurements (often Gaussian).
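The generative side of this setup can be sketched with a simple Euler–Maruyama simulation. The sketch below is illustrative, not the paper's implementation: the specific drift (a damped linear oscillator), the constant diffusion, and the exponential-link Poisson readout `poisson_rates` are placeholder choices standing in for µθ, σθ, and λθ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(x, u):
    # Hypothetical drift: a damped linear oscillator driven by the control u.
    A = np.array([[0.0, 1.0], [-1.0, -0.1]])
    return A @ x + u

def sigma(x, u):
    # Hypothetical state-independent diffusion matrix.
    return 0.1 * np.eye(2)

def euler_maruyama(x0, u_fn, dt, n_steps):
    """Simulate dx = mu dt + sigma dW with the Euler-Maruyama scheme."""
    xs, x = [x0], x0
    for k in range(n_steps):
        u = u_fn(k * dt)
        dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + mu(x, u) * dt + sigma(x, u) @ dW
        xs.append(x)
    return np.stack(xs)

def poisson_rates(xs, C, b, dt):
    # Observation model in the spirit of lambda_theta:
    # exponential link from latent state to per-bin firing rates.
    return np.exp(xs @ C.T + b) * dt

path = euler_maruyama(np.zeros(2), lambda t: np.zeros(2), dt=0.01, n_steps=500)
C = rng.normal(size=(10, 2))
b = -2.0 * np.ones(10)
rates = poisson_rates(path, C, b, dt=0.01)
spikes = rng.poisson(rates)  # simulated spike counts Y for 10 neurons
```

Swapping `mu` for a neural network, a mechanistic model, or a hybrid of the two is what the framework's model-comparison claim rests on: the simulation loop is unchanged.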

Because the exact posterior p(x|D) is intractable, the authors employ variational inference with an “augmented SDE” as the approximate posterior. The augmented SDE shares the same diffusion σθ but has its own drift νϕ(·) that depends on the latent state, the control signal, and a context vector c(t) built from the observed data via an observation encoder ξϕ. The initial distribution Q0 is parameterized by neural networks αϕ and βϕ conditioned on a finite set of context samples. Using Girsanov’s theorem, the KL divergence between the prior (generative SDE) and the posterior (augmented SDE) can be expressed as a path‑wise integral, yielding a tractable evidence lower bound (ELBO):

 ELBO = E_{x̃∼Q}[ log p(Y, B | x̃) − ½ ∫₀ᵀ ‖σθ(x̃(t), u(t))⁻¹ (νϕ(x̃(t), u(t), c(t)) − µθ(x̃(t), u(t)))‖² dt ] − KL(Q₀ ‖ P₀),

where x̃ denotes a path sampled from the augmented (posterior) SDE, the quadratic term is the Girsanov path-wise KL penalty between the two drifts under the shared diffusion, and the final term compares the learned initial distribution Q₀ with the prior's.
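A discretized Monte Carlo estimate of this bound is straightforward once paths have been sampled from the augmented SDE. The sketch below assumes diagonal initial Gaussians and a Poisson likelihood; the toy drifts `mu`, `nu`, and diffusion `sigma` at the bottom are illustrative placeholders, not the paper's parameterizations.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def pathwise_kl(xs, us, cs, mu, nu, sigma, dt):
    """Girsanov KL between the augmented SDE (drift nu, using context c) and
    the generative SDE (drift mu); both share the diffusion sigma."""
    kl = 0.0
    for x, u, c in zip(xs, us, cs):
        r = np.linalg.solve(sigma(x, u), nu(x, u, c) - mu(x, u))
        kl += 0.5 * float(r @ r) * dt  # ½ ||sigma^{-1}(nu - mu)||² dt
    return kl

def poisson_loglik(y, rates):
    # log p(y | rates) for independent Poisson counts.
    lgamma = np.vectorize(math.lgamma)
    return float(np.sum(y * np.log(rates) - rates - lgamma(y + 1.0)))

def gaussian_kl(m_q, s_q, m_p, s_p):
    # Closed-form KL(Q0 || P0) between diagonal Gaussians for the initial state.
    return float(np.sum(np.log(s_p / s_q)
                        + (s_q**2 + (m_q - m_p)**2) / (2.0 * s_p**2) - 0.5))

# Toy instantiation (illustrative placeholders only).
mu = lambda x, u: -x + u
nu = lambda x, u, c: -x + u + 0.1 * c   # posterior drift nudged by context c(t)
sigma = lambda x, u: 0.2 * np.eye(x.size)

T, dt, d = 50, 0.01, 2
xs = rng.normal(size=(T, d))            # stand-in for a sampled posterior path
us = np.zeros((T, d))
cs = rng.normal(size=(T, d))

kl_path = pathwise_kl(xs, us, cs, mu, nu, sigma, dt)
kl0 = gaussian_kl(np.zeros(d), np.ones(d), np.zeros(d), np.ones(d))
y = rng.poisson(1.0, size=5)
elbo = poisson_loglik(y, np.full(5, 1.0)) - kl_path - kl0
```

In practice the expectation is taken over several sampled paths and all three terms are differentiable in (θ, ϕ), so the bound can be maximized directly with stochastic gradients.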

