The Intrinsic Properties of Brain Based on the Network Structure

Notice: This research summary and analysis were generated automatically using AI technology. For full accuracy, please refer to the original arXiv source.

Objective: The brain is a remarkable organ that helps organisms adapt to their environment. The network is the brain's most essential structure, yet the capabilities of even a simple network remain unclear. In this study, we attempt to explain some brain functions using network properties alone. Methods: Every network can be reduced to an equivalent simplified network expressed as an equation set, whose dynamics can be described by a few basic equations derived mathematically. Results: (1) In a closed network, stability depends on the proportion of excitatory to inhibitory synapses, and spike probabilities in the assembly satisfy the solution of a nonlinear equation set. (2) Network activity can spontaneously evolve into a particular distribution under different stimulation, which is closely related to decision making. (3) Short-term memory can be formed by the coupling of network assemblies. Conclusion: The intrinsic properties of a network may contribute to some important brain functions.


💡 Research Summary

The authors set out to explore how much of brain function can be explained solely by the structural properties of neural networks. They begin by representing any real neural circuit as a directed adjacency matrix and then aggressively simplify it: every synapse weight is reduced to –1, 0, or +1, all neuronal thresholds are set to zero, and inhibitory neurons are replaced by direct inhibitory connections. This “simplified matrix” (denoted M) is claimed to be mathematically equivalent to any more complex network, provided enough elements are present.
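As a rough illustration, such a simplified matrix can be drawn at random. This is a minimal sketch: the connection probabilities (`p_exc`, `p_inh`) and the exclusion of self-connections are assumptions for the example, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simplified_matrix(n, p_exc=0.25, p_inh=0.25):
    """Draw an n x n matrix whose entries are +1 (excitatory synapse),
    -1 (inhibitory synapse), or 0 (no connection), with the remaining
    probability mass on 0. Self-connections are zeroed out."""
    M = rng.choice([1, -1, 0], size=(n, n),
                   p=[p_exc, p_inh, 1 - p_exc - p_inh])
    np.fill_diagonal(M, 0)
    return M

M = simplified_matrix(8)
```

Every entry of `M` is one of {-1, 0, +1}, mirroring the paper's reduction of continuous synaptic weights to three values.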

Dynamics are described by a set of recursive equations (labeled (1)–(5) in the manuscript). Unfortunately the equations are garbled, making it impossible to reconstruct the exact update rule, but the authors state that each neuron’s state Y_i(t) is binary (spike = 1, silence = 0) and that the next‑step state is determined by a function F applied to a linear combination of the current network state and a “medium process” X.
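The exact update rule cannot be recovered from the garbled equations, but one plausible reading of the stated ingredients (binary states, zero thresholds, a Heaviside-like F applied to a linear combination of the current state and an additive external drive X) can be sketched as follows; the synchronous update scheme is likewise an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def heaviside(x):
    # F: spike (1) when the summed input exceeds the zero threshold, else silent (0)
    return (x > 0).astype(int)

def step(M, Y, X=None):
    """One synchronous update: the next binary state from the current
    state Y and an optional external drive X (the 'medium process')."""
    drive = M @ Y
    if X is not None:
        drive = drive + X
    return heaviside(drive)

n = 6
M = rng.choice([1, -1, 0], size=(n, n), p=[0.3, 0.3, 0.4])
Y = rng.integers(0, 2, size=n)
for _ in range(10):
    Y = step(M, Y)
```

Iterating `step` yields the recursive dynamics the manuscript describes, with each neuron's state remaining strictly binary.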

Using this framework, the authors run simulations on random assemblies of 500–1000 neurons. Their main empirical observations are:

  1. Spike‑probability distribution – In a closed (no external input) random network, the probability that a given neuron spikes follows a sigmoid (logistic) curve when plotted against the average input it receives. The shape of the curve is said to emerge automatically when the proportion of excitatory (+1) and inhibitory (‑1) connections is roughly balanced (≈ 50 % each).

  2. Non‑linearity of the solution – The spike probability of each neuron cannot be predicted by simply summing the incoming weights; instead it is the solution of a nonlinear equation set that also depends on external stimulation. The authors illustrate this by re‑ordering rows of M and showing that spike probabilities do not reorder accordingly.

  3. Effect of external stimulation – When a stepwise external input is applied, the sigmoid curve steepens, polarizing neurons toward high or low firing probabilities and reducing the network’s Shannon/Boltzmann entropy. The authors interpret this as “more information leads to a more confident decision.”

  4. Stability and E/I balance – The stability of the network (i.e., whether neuron activities settle into a fixed point or a small limit cycle) is strongly linked to the ratio of +1 to –1 entries in M. Near‑equal excitatory/inhibitory proportions yield stable dynamics; deviations cause runaway excitation or inhibition. The proportion of zero entries (i.e., missing connections) does not affect stability in a closed network but does matter when external inputs are present.

  5. Memory without synaptic plasticity – The authors propose a speculative mechanism whereby a strong, brief stimulus pushes a network into a highly polarized state, thereby encoding a “phase” of firing. After the stimulus disappears, the network returns to a deterministic but different firing sequence, which can be coupled with other assemblies to store information. They illustrate a toy architecture with four coupling units feeding a downstream neuron, showing that varying thresholds can produce distinct output patterns for different prior inputs.
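The stability claim in point 4 can be probed with a toy simulation. Everything here (network size, connection probabilities, the synchronous Heaviside update, the burn-in length) is an assumption of this sketch, not a setting reported by the authors; it only illustrates the qualitative E/I-balance effect.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_activity(n, p_exc, p_inh, steps=200):
    """Run a closed binary network and return the time-averaged
    fraction of spiking neurons after a burn-in of steps // 2."""
    M = rng.choice([1, -1, 0], size=(n, n),
                   p=[p_exc, p_inh, 1 - p_exc - p_inh])
    Y = rng.integers(0, 2, size=n)
    history = []
    for t in range(steps):
        Y = (M @ Y > 0).astype(int)  # zero threshold, synchronous update
        if t >= steps // 2:
            history.append(Y.mean())
    return float(np.mean(history))

# Excitation-dominated networks saturate toward all-on, inhibition-dominated
# ones collapse toward all-off, and a roughly balanced mix keeps activity
# at an intermediate, stable level.
a_exc = mean_activity(200, 0.40, 0.10)
a_inh = mean_activity(200, 0.10, 0.40)
a_bal = mean_activity(200, 0.25, 0.25)
```

Under these assumptions the excitation-dominated run pins near full activity and the inhibition-dominated run near silence, while the balanced case settles at an intermediate firing fraction, consistent with the stability-versus-E/I-ratio observation.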

In the discussion, the authors argue that because their simplified model contains the “core” functions of any more detailed network, the relationships they uncovered (e.g., the dependence of overall activity on excitatory/inhibitory ratio, the sigmoid law for spike probability) should hold in real brains. They acknowledge that biological factors such as blood flow, hormones, and neuromodulators are omitted, and they suggest that the brain may perform arithmetic operations by manipulating spike frequencies rather than explicit symbolic numbers.

Critical appraisal – While the ambition to reduce brain dynamics to a tractable mathematical system is commendable, the paper suffers from several serious shortcomings. The simplifications are extreme: collapsing continuous synaptic strengths, diverse time delays, and heterogeneous neuronal electrophysiology into binary values discards most known mechanisms that shape cortical computation. The recursive equations are not presented clearly, preventing verification or replication. Simulation details (time step, integration method, random seed, exact connectivity density) are missing, making the reported sigmoid relationship and stability findings difficult to assess. Moreover, the claim that memory can be stored without any synaptic modification contradicts a vast body of experimental evidence showing that long‑term potentiation/depression underlies lasting changes in behavior. The “memory‑by‑phase” hypothesis is presented without quantitative analysis or empirical support.

Overall, the manuscript offers an interesting conceptual perspective—that certain statistical regularities (e.g., a logistic relationship between input and firing probability) may emerge from balanced random networks—but the methodological opacity, lack of rigorous mathematical proof, and insufficient biological grounding limit its impact. Future work would need to (1) provide a clear, reproducible formulation of the update rules, (2) explore a broader parameter space with transparent reporting, (3) compare model predictions against physiological data, and (4) integrate synaptic plasticity mechanisms to address how lasting memories could arise. Only then can the proposed intrinsic properties be convincingly linked to real brain function.

