Amortized Inference of Neuron Parameters on Analog Neuromorphic Hardware


Our work utilized a non-sequential simulation-based inference algorithm to provide an amortized neural density estimator, which approximates the posterior distribution for seven parameters of the adaptive exponential integrate-and-fire neuron model of the analog neuromorphic BrainScaleS-2 substrate. We constrained the large parameter space by training a binary classifier to predict parameter combinations yielding observations in regimes of interest, i.e., moderate spike counts. We compared two neural density estimators: one using handcrafted summary statistics and one using a summary network trained in combination with the neural density estimator. The summary network yielded a more focused posterior and generated posterior predictive traces that accurately captured the membrane potential dynamics. When using handcrafted summary statistics, posterior predictive traces matched the included features but showed deviations in the exact dynamics. The posteriors showed signs of bias and miscalibration but were still able to yield posterior predictive samples that were close to the target observations on which the posteriors were conditioned. Our results validate amortized simulation-based inference as a tool for parameterizing analog neuron circuits.


💡 Research Summary

This paper presents a non‑sequential simulation‑based inference (SBI) approach for estimating the parameters of an adaptive exponential integrate‑and‑fire (AdEx) neuron implemented on the analog neuromorphic BrainScaleS‑2 (BSS‑2) platform. While previous work employed the sequential neural posterior estimation (SNPE) algorithm, which yields a posterior only for a single target observation, the authors adopt the amortized Bayesian inference framework BayesFlow. In this setting, a neural density estimator (NDE) is trained once on a large simulated dataset and can subsequently provide posterior distributions for arbitrary new observations without retraining.
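To make the inference target concrete, the AdEx dynamics can be sketched as a forward-Euler software simulation. Everything here is an illustrative stand-in: the parameter values are generic SI-unit defaults for a software AdEx neuron, not the hardware parameters, units, or ranges used on BSS-2.

```python
import math

def simulate_adex(I=500e-12, t_sim=0.5, dt=1e-5,
                  C=100e-12, g_l=10e-9, E_l=-70e-3, V_T=-50e-3,
                  delta_T=2e-3, V_r=-58e-3, a=2e-9, b=20e-12, tau_w=0.1):
    """Forward-Euler integration of the AdEx neuron model.

    Returns (membrane trace, spike times). All parameter values are
    illustrative SI-unit defaults, not the hardware values inferred
    in the paper.
    """
    V, w = E_l, 0.0
    V_cut = 0.0  # numerical cutoff for the exponential upswing
    trace, spikes = [], []
    n_steps = round(t_sim / dt)
    for step in range(n_steps):
        # membrane equation: leak + exponential term - adaptation + input
        dV = (-g_l * (V - E_l)
              + g_l * delta_T * math.exp((V - V_T) / delta_T)
              - w + I) / C
        # adaptation variable: sub-threshold coupling a, time constant tau_w
        dw = (a * (V - E_l) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_cut:  # spike: reset V, apply spike-triggered adaptation b
            V = V_r
            w += b
            spikes.append(step * dt)
        trace.append(V)
    return trace, spikes
```

With a supra-threshold input current the sketch produces an adapting spike train, which is the kind of observation (voltage trace plus spike times) the SBI pipeline conditions on.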

The authors focus on seven configurable hardware parameters (leak conductance g_l, reset potential V_r, exponential slope ΔT, effective threshold V_T, sub‑threshold adaptation a, spike‑triggered adaptation b, and adaptation conductance g_τw). All other model parameters are fixed. To generate training data, 50 000 random parameter sets were drawn uniformly from the full hardware range (0–1022) and the resulting membrane voltage traces and spike times were recorded. Because most random configurations either produced no spikes or an excessive number of spikes, a binary classifier (a four‑block ResNet) was trained to predict whether a parameter set would yield a moderate spike count (1–70 spikes). The classifier was then used to filter parameter draws, resulting in a curated dataset of 600 000 samples for NDE training and 600 samples for validation.
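The classifier-gated data generation described above amounts to rejection sampling over the hardware parameter range. The sketch below illustrates only that interface; the accept function is a hypothetical placeholder heuristic standing in for the trained four-block ResNet classifier, which in the paper predicts a moderate spike count (1–70 spikes) from the parameter set.

```python
import random

PARAM_NAMES = ["g_l", "V_r", "delta_T", "V_T", "a", "b", "g_tau_w"]
HW_RANGE = (0, 1022)  # full digital range of each hardware parameter

def accepts_moderate_spiking(params):
    """Stand-in for the trained ResNet classifier from the paper.

    The real classifier predicts whether a parameter set yields
    1-70 spikes on hardware; this placeholder rule (threshold code
    above reset code) only illustrates the filtering interface.
    """
    return params["V_T"] > params["V_r"]

def draw_filtered(n, seed=0):
    """Rejection-sample parameter sets until n pass the classifier."""
    rng = random.Random(seed)
    kept = []
    while len(kept) < n:
        params = {name: rng.randint(*HW_RANGE) for name in PARAM_NAMES}
        if accepts_moderate_spiking(params):
            kept.append(params)
    return kept
```

Because rejected draws are never simulated on hardware, the filter concentrates the expensive recording budget on the spiking regime of interest.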

Two inference pipelines were compared. The first used a set of 12 handcrafted summary statistics derived from spike times and voltage traces (firing rate, latency to first spike, inter‑spike intervals, coefficient of variation, adaptation index, baseline and trough voltages, etc.). The second employed an automatically learned summary network integrated within BayesFlow: two 1‑D convolutional layers followed by a recurrent neural network (128 units) that maps the raw voltage trace to a low‑dimensional summary vector. Both pipelines used coupling‑flow NDEs; the handcrafted‑statistics NDE comprised ten coupling blocks and was trained for 150 epochs, while the summary‑network NDE used eight coupling blocks and was jointly trained with the summary network for 30 epochs.
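A few of the spike-based handcrafted statistics can be sketched as below. The exact feature definitions (in particular the adaptation index) are assumptions for illustration, not the paper's implementation, and the voltage-based features (baseline, trough) are omitted.

```python
import statistics

def spike_summaries(spike_times, t_sim):
    """Compute a subset of spike-based summary statistics: firing
    rate, first-spike latency, mean inter-spike interval (ISI),
    coefficient of variation, and an adaptation index.

    The adaptation index here is the mean normalized ISI increase,
    positive when ISIs lengthen over the trace (spike-frequency
    adaptation); this definition is an assumption.
    """
    n = len(spike_times)
    rate = n / t_sim
    latency = spike_times[0] if n else float("nan")
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    mean_isi = statistics.mean(isis) if isis else float("nan")
    cv = (statistics.stdev(isis) / mean_isi) if len(isis) > 1 else float("nan")
    adapt = (statistics.mean([(i1 - i0) / (i1 + i0)
                              for i0, i1 in zip(isis, isis[1:])])
             if len(isis) > 1 else float("nan"))
    return {"rate": rate, "latency": latency, "mean_isi": mean_isi,
            "cv": cv, "adaptation_index": adapt}
```

Statistics like these compress a trace into a fixed-length vector for the NDE, whereas the summary network learns its own compression directly from the raw voltage trace.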

Results show that both methods recover posterior mass near the true parameter values used to generate a test observation, but the summary‑network approach yields considerably tighter posteriors, especially for parameters such as V_r and the ΔT–V_T pair, which exhibit strong correlation. The spike‑triggered adaptation parameter b remains broadly distributed in both cases. Posterior predictive checks reveal that samples from the handcrafted‑statistics posterior reproduce the overall spike count and adaptation behavior but deviate in the fine structure of the membrane potential: spikes tend to occur earlier and the inter‑spike voltage trajectory is flatter than in the target trace. In contrast, samples from the summary‑network posterior closely match the target voltage dynamics, preserving both the timing and shape of the sub‑threshold fluctuations.

Despite some observed bias and mis‑calibration in the estimated posteriors, the amortized inference framework successfully generates realistic posterior predictive traces that align with experimental observations. This demonstrates that non‑sequential SBI, combined with a learned summary network, is a viable tool for parameterizing analog neuromorphic circuits, even in the presence of hardware‑induced temporal noise and variability. The study highlights the importance of (i) constraining the parameter space via a classifier, (ii) leveraging automatic feature extraction for richer summaries, and (iii) employing amortized inference to enable rapid, on‑the‑fly parameter estimation for a wide range of neuronal recordings. Future work may extend the approach to multi‑neuron networks, incorporate posterior calibration techniques, and explore online inference for real‑time hardware tuning.

