Identifying Connectivity Distributions from Neural Dynamics Using Flows


Connectivity structure shapes neural computation, but inferring this structure from population recordings is degenerate: multiple connectivity structures can generate identical dynamics. Recent work uses low-rank recurrent neural networks (lrRNNs) to infer low-dimensional latent dynamics and connectivity structure from observed activity, enabling a mechanistic interpretation of the dynamics. However, standard approaches for training lrRNNs can recover spurious structures irrelevant to the underlying dynamics. We first characterize the identifiability of connectivity structures in lrRNNs and determine conditions under which a unique solution exists. Then, to find such solutions, we develop an inference framework based on maximum entropy and continuous normalizing flows (CNFs), trained via flow matching. Instead of estimating a single connectivity matrix, our method learns the maximally unbiased distribution over connection weights consistent with observed dynamics. This approach captures complex yet necessary distributions such as heavy-tailed connectivity found in empirical data. We validate our method on synthetic datasets with connectivity structures that generate multistable attractors, limit cycles, and ring attractors, and demonstrate its applicability in recordings from rat frontal cortex during decision-making. Our framework shifts circuit inference from recovering connectivity to identifying which connectivity structures are computationally required, and which are artifacts of underconstrained inference.


💡 Research Summary

The paper tackles a fundamental problem in systems neuroscience: inferring the synaptic connectivity that underlies observed population activity. While low‑rank recurrent neural networks (lrRNNs) have become a popular tool for extracting low‑dimensional latent dynamics and a corresponding connectivity matrix from neural recordings, the inverse problem is severely under‑constrained. Many distinct connectivity structures can generate identical dynamics, so standard point‑estimate training of lrRNNs often yields spurious weight patterns that are not required by the data.

The authors first formalize the identifiability issue. In an lrRNN the connectivity matrix J is factorized as J = MNᵀ/K with M, N ∈ ℝ^{K×R} (R ≪ K). In the limit of infinitely many neurons (K → ∞), the dynamics of the latent variable z ∈ ℝ^{R} obey a mean‑field equation that depends only on expectations over the joint distribution p(m, n, b, d) of the rows of M, N, the input‑projection matrix B, and the bias d. Three sources of non‑identifiability emerge: (1) the latent state z is identifiable only up to an invertible linear transformation A, which induces a corresponding transformation of the connectivity distribution; (2) the distribution over (m, b, d) can be recovered only if the latent trajectories span the full (R + K_in)‑dimensional space and the input does not lie in that span; (3) the conditional distribution of n given (m, b, d) influences the dynamics only through its first moment µ(m, b, d). Consequently, any distribution sharing the same µ yields identical mean‑field dynamics, leaving the higher‑order statistics of n completely unconstrained by the observed activity.
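The third point can be made concrete with a small numerical sketch. Here a toy rank‑1 network with a tanh nonlinearity is simulated (all dimensions and parameter values below are illustrative choices, not the paper's): two choices of n with the same conditional mean given m, one carrying extra zero‑mean variability, yield nearly identical latent dynamics at large K.

```python
import numpy as np

rng = np.random.default_rng(1)
K, dt, steps = 5000, 0.1, 400

# Toy rank-1 lrRNN: the latent kappa obeys the mean-field-style update
#   d kappa/dt = -kappa + (1/K) * sum_i n_i * tanh(m_i * kappa).
m = rng.standard_normal(K)
n_same_mean = 2.0 * m                            # E[n | m] = 2m, no extra noise
n_extra_var = 2.0 * m + rng.standard_normal(K)   # same conditional mean,
                                                 # added zero-mean variability

def simulate(n, kappa0=0.5):
    """Euler-integrate the rank-1 latent dynamics for a given n."""
    kappa = kappa0
    for _ in range(steps):
        kappa += dt * (-kappa + np.mean(n * np.tanh(m * kappa)))
    return kappa

k_a = simulate(n_same_mean)
k_b = simulate(n_extra_var)
print(abs(k_a - k_b))  # small: higher-order statistics of n are invisible
```

Both runs settle near the same fixed point; the residual gap is a finite‑size O(1/√K) effect, illustrating why activity alone cannot constrain the fluctuations of n around µ(m, b, d).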

To resolve this degeneracy the authors adopt a maximum‑entropy principle. They formulate an optimization problem that maximizes the differential entropy of p(n|m,b,d) subject to constraints on its mean (µ) and, optionally, its covariance S. Under standard regularity conditions the solution is a Gaussian N(µ,S). Since S cannot be inferred from activity alone, it is treated as a hyper‑parameter or constrained by additional experimental information (e.g., Dale’s law, perturbation data).
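In compact form (with the conditioning on (m, b, d) suppressed for readability, and notation simplified relative to the paper), the optimization being posed is:

```latex
\begin{aligned}
\max_{p}\;& -\int p(n)\,\log p(n)\,\mathrm{d}n \\
\text{s.t.}\;& \int p(n)\,\mathrm{d}n = 1,
\qquad \mathbb{E}_{p}[n] = \mu,
\qquad \operatorname{Cov}_{p}[n] = S .
\end{aligned}
```

By the standard Lagrangian argument for maximum entropy under first‑ and second‑moment constraints, the maximizer is the Gaussian 𝒩(µ, S), as stated above.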

The proposed inference framework, named Connector, proceeds in two stages. First, an lrRNN is trained on the neural data using existing methods (e.g., LINT, variational sequential Monte Carlo). The learned rows of M and B, together with the entries of d, are treated as samples from p(m, b, d). A continuous normalizing flow (CNF) is then fitted to these samples via the flow‑matching objective, yielding a flexible density estimator that can capture multimodal, heavy‑tailed, or otherwise complex distributions over the low‑rank factors. Second, given the inferred latent trajectories z₁:T and the estimated M, B, and d, the authors derive a linear regression relationship w ≈ αK Nᵀ r, where r = ϕ(h) is the vector of firing‑rate estimates and w is a transformed version of the latent update. Because the activity matrix r is low‑rank, the regression problem is ill‑posed; an ℓ₂ regularization term is added, leading to a closed‑form solution that can be interpreted as a maximum‑a‑posteriori estimate of the conditional mean µ(m, b, d). Directions in the null space of r are shrunk to zero, reflecting the fact that they are not constrained by the data.
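The null‑space shrinkage behaviour of ℓ₂‑regularized regression can be illustrated generically (a minimal sketch with made‑up dimensions, not the paper's exact estimator): when the design matrix is low‑rank, the ridge solution lies exactly in its row space, so weight directions the data never probes are set to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank "activity" design matrix: T samples of K-dim rates, rank R << K.
T, K, R = 200, 50, 3
X = rng.standard_normal((T, R)) @ rng.standard_normal((R, K))
beta_true = rng.standard_normal(K)
y = X @ beta_true + 0.01 * rng.standard_normal(T)

# Ridge (l2-regularized) regression, closed form:
#   beta_hat = (X^T X + lam I)^{-1} X^T y
lam = 0.1
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(K), X.T @ y)

# By the push-through identity, beta_hat = X^T (X X^T + lam I)^{-1} y,
# so it lies in the row space of X: its component in the null space of X
# (directions the low-rank activity never constrains) is zero.
U, S, Vt = np.linalg.svd(X, full_matrices=True)
null_basis = Vt[R:]                 # (K - R) unconstrained directions
null_component = null_basis @ beta_hat
print(np.linalg.norm(null_component))  # ~0 up to numerical precision
```

This mirrors, in miniature, the interpretation in the text: regularization resolves the ill‑posedness by shrinking exactly those components of the estimate that the observed activity leaves unconstrained.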

The authors validate the method on three synthetic families of networks designed to produce (i) multistable attractors, (ii) limit‑cycle oscillations, and (iii) ring attractors. In each case, Connector accurately recovers the underlying distribution over M and N, including heavy‑tailed weight statistics that are invisible to point‑estimate methods. Finally, the framework is applied to electrophysiological recordings from rat frontal cortex during a decision‑making task. While standard lrRNN fitting yields a relatively sparse, near‑Gaussian weight matrix, Connector infers a connectivity distribution with pronounced heavy tails, aligning with recent anatomical surveys that report highly heterogeneous synaptic strengths.

In summary, the paper makes three major contributions: (1) a rigorous analysis of the identifiability limits of connectivity inference in low‑rank recurrent networks; (2) a novel maximum‑entropy, flow‑based density‑estimation framework that learns the full distribution over synaptic weights consistent with observed dynamics; and (3) empirical demonstrations that this distributional view captures biologically realistic connectivity features and avoids spurious structure. By shifting the focus from a single “best‑fit” connectivity matrix to a principled family of admissible circuits, the work opens a new avenue for hypothesis generation and experimental testing in circuit neuroscience.

