Feature selection in simple neurons: how coding depends on spiking dynamics
The relationship between a neuron’s complex inputs and its spiking output defines the neuron’s coding strategy. This is frequently and effectively modeled phenomenologically by one or more linear filters that extract the components of the stimulus that are relevant for triggering spikes, and a nonlinear function that relates stimulus to firing probability. In many sensory systems, these two components of the coding strategy are found to adapt to changes in the statistics of the inputs, in such a way as to improve information transmission. Here, we show for two simple neuron models how feature selectivity as captured by the spike-triggered average depends both on the parameters of the model and on the statistical characteristics of the input.
💡 Research Summary
The paper investigates how a neuron’s feature selection, as captured by the spike‑triggered average (STA), depends on both intrinsic model parameters and the statistical properties of its inputs. Using two canonical spiking neuron models—the leaky integrate‑and‑fire (LIF) and the exponential integrate‑and‑fire (EIF)—the authors systematically vary key dynamical parameters (leak conductance, threshold voltage, reset potential, membrane time constant, and the exponential sharpness Δ_T) and expose the models to Gaussian stochastic currents characterized by mean (μ), variance (σ²), and correlation time (τ_c).
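The setup described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: an exact-update Ornstein‑Uhlenbeck process supplies a Gaussian input with mean μ, variance σ², and correlation time τ_c, and a forward‑Euler EIF integrator (which reduces to the hard‑threshold LIF when Δ_T = 0) generates spikes. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def ou_input(n, dt, mu, sigma, tau_c, rng):
    """Ornstein-Uhlenbeck current: Gaussian with mean mu, variance sigma**2,
    and correlation time tau_c (exact discrete-time update)."""
    I = np.empty(n)
    I[0] = mu
    a = np.exp(-dt / tau_c)
    b = sigma * np.sqrt(1.0 - a * a)
    for t in range(1, n):
        I[t] = mu + a * (I[t - 1] - mu) + b * rng.standard_normal()
    return I

def simulate_eif(I, dt=0.1, tau_m=20.0, e_leak=-70.0, v_thresh=-50.0,
                 v_reset=-70.0, delta_t=0.0, v_spike=-30.0):
    """Forward-Euler integration of an exponential integrate-and-fire neuron.
    delta_t = 0 recovers the hard-threshold LIF. Returns spike indices."""
    v = e_leak
    spikes = []
    for t, i_t in enumerate(I):
        drive = -(v - e_leak) + i_t
        if delta_t > 0:
            # exponential spike-initiation term of the EIF
            drive += delta_t * np.exp((v - v_thresh) / delta_t)
        v += dt / tau_m * drive
        # LIF spikes at the threshold; EIF at a higher numerical cutoff
        cutoff = v_thresh if delta_t == 0 else v_spike
        if v >= cutoff:
            spikes.append(t)
            v = v_reset
    return np.array(spikes, dtype=int)
```

Sweeping `tau_m`, `v_thresh`, `v_reset`, `delta_t`, and the input statistics `mu`, `sigma`, `tau_c` over grids would reproduce the kind of parameter exploration the summary describes.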
Simulations generate large ensembles of spikes for each parameter set, allowing precise estimation of the STA (the linear filter in a linear‑nonlinear, LN, description) and of the nonlinear firing‑probability function. The results reveal several robust trends. Increasing the leak conductance shortens the membrane's effective integration window, concentrating the STA closer to the spike time. Raising the voltage threshold reduces the STA amplitude and broadens its temporal extent, reflecting the need for larger input excursions to trigger spikes. Raising the input mean lifts the baseline of the STA and slightly advances its peak, while increasing the input variance sharpens and amplifies the peak. With colored noise of longer correlation time τ_c, the STA flattens and spreads over a wider temporal window, indicating a shift of sensitivity from high‑frequency toward low‑frequency stimulus components.
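Estimating the STA itself is a simple average of stimulus segments preceding each spike. A minimal sketch, assuming the stimulus and spike times are already available as arrays (the function name and the fixed window length are illustrative, not the paper's code):

```python
import numpy as np

def spike_triggered_average(stimulus, spike_idx, window, dt=0.1):
    """Average the stimulus over the `window` samples preceding each spike.
    Spikes too close to the start of the record are dropped.
    Returns (lags, sta), with lags giving time relative to the spike."""
    valid = spike_idx[spike_idx >= window]
    snippets = np.stack([stimulus[t - window:t] for t in valid])
    lags = -dt * np.arange(window, 0, -1)  # negative times: before the spike
    return lags, snippets.mean(axis=0)
```

The trends in the summary would appear directly in this estimate: a shorter integration window concentrates the STA near lag zero, while slowly varying (long‑τ_c) input flattens and widens it.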
The nonlinear function adapts as well. Higher input variance expands the saturation region of the nonlinearity and effectively lowers the operational threshold, a strategy that preserves information transmission under noisy conditions. Longer correlation times produce a shallower slope in the nonlinearity, reflecting reduced sensitivity to rapid fluctuations when the stimulus varies slowly.
These findings demonstrate that, even in simple point‑neuron models, the components of the LN coding framework are not static. Both the linear filter (STA) and the static nonlinearity are jointly reshaped by the interplay of intrinsic dynamics and external statistics. This dynamic reconfiguration mirrors adaptive coding observed in sensory systems, where neurons adjust their feature selectivity to maintain efficient information flow across changing environments.
The authors conclude that the classic LN model, which often assumes fixed filters and nonlinearities, must be extended to incorporate stimulus‑dependent adaptation of both elements. Their work provides a quantitative baseline for such extensions and suggests that similar adaptive mechanisms likely operate in more complex, biologically realistic neurons and networks. Future directions include testing these predictions against in‑vivo recordings, exploring multi‑neuron interactions, and integrating synaptic plasticity mechanisms that could further modulate feature selection.