Neural network model with discrete and continuous information representation


📝 Abstract

An associative memory model and a neural network model with a Mexican-hat type interaction are the two most typical attractor networks among artificial neural network models. The associative memory model has discretely distributed fixed-point attractors and achieves a discrete information representation. In contrast, a neural network model with a Mexican-hat type interaction uses a line attractor to achieve a continuous information representation, as observed in working memory in the prefrontal cortex and in columnar activity in the visual cortex. In the present study, we propose a neural network model that achieves both discrete and continuous information representation. Using a statistical-mechanical analysis, we find that a localized retrieval phase exists in the proposed model, in which a memory pattern is retrieved by a localized subpopulation of the network. In the localized retrieval phase, the discrete and continuous information representation is achieved through the orthogonality of the memory patterns and the neutral stability of fixed points along the position of the localized retrieval. The obtained phase diagram suggests that the antiferromagnetic interaction and the external field are important for generating the localized retrieval phase.


📄 Content

An associative memory model and a neural network model with a Mexican-hat type interaction are the two most typical attractor networks used in artificial neural network models.

The associative memory model, represented by the Hopfield model, has discretely distributed fixed-point attractors. 1) On the other hand, the model with a Mexican-hat type interaction, modeled after the hypercolumn of the primary visual cortex and the frontal cortex during the memory-guided saccade, has continuously distributed fixed-point attractors (what is termed a line attractor) that reflect the distance relationships between inputs. [2][3][4][5] Recent electrophysiological studies suggest that neurons in the inferior temporal (IT) cortex represent information as sparse and local neuronal excitations; 6,7) specifically, neurons in the IT area achieve a discrete and continuous information representation by using both the position of a local excitation and the microscopic firing pattern of the sparse activity within it. For instance, neurons encode the angle of a visual stimulus by the position of the local excitation, and consequently the corresponding information representation (firing pattern) changes continuously when the visual stimulus is rotated. 6) Meanwhile, neurons discriminate visual stimuli by differences between the microscopic firing patterns of the sparse activity, and consequently the information representation changes discretely when the visual stimulus is replaced. 7) Based on this evidence, Wada et al. 8) proposed a self-organizing map (SOM) model of the IT area, and confirmed by numerical experiments that the sparse and local neuronal excitation is self-organized in their model. Additionally, Hamaguchi et al. 4,5) proposed an Ising spin neural network model with a disordered Mexican-hat type interaction, and verified that the sparse and local neuronal excitation is achieved in their model. However, these models do not store more than one pattern. In addition, Ichiki et al. 9) analyzed a model with spatially modulated Hebbian interactions. Equilibria of their model are transformed into those of the model with the Mexican-hat type interaction 4,5) by a gauge transformation based on a stored pattern.
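The behavior of a Mexican-hat type interaction described above can be illustrated with a minimal sketch. The snippet below (an illustration, not the paper's exact model; `J0` and `J1` are assumed parameter names) builds a cosine-type interaction on a ring: nearby neurons excite each other while distant neurons inhibit each other, which is what supports a localized bump of activity that can sit at any position along the ring.

```python
import numpy as np

# Illustrative sketch: a cosine-type "Mexican-hat" interaction on a
# ring of N neurons. J0 is a uniform (inhibitory) offset and J1 the
# strength of the distance-dependent cosine term.
N = 100
theta = 2.0 * np.pi * np.arange(N) / N - np.pi  # positions in (-pi, pi]

J0, J1 = -0.5, 1.0
J = J0 + J1 * np.cos(theta[:, None] - theta[None, :])

# Nearby neurons (small |theta_i - theta_j|) interact positively ...
print(J[0, 1] > 0)       # True
# ... while neurons on opposite sides of the ring inhibit each other.
print(J[0, N // 2] < 0)  # True
```

Because the interaction depends only on the distance theta_i - theta_j, a bump of activity centered anywhere on the ring has the same energy, which is the mechanism behind the continuously distributed fixed points (line attractor).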

In this study, we propose a solvable neural network model that achieves a discrete and continuous information representation. Our model is based on a Hebbian interaction weighted by a Mexican-hat type interaction. 9) We use a statistical-mechanical analysis to show that a localized retrieval (LR) phase exists in the proposed model, where a localized part of a stored pattern is retrieved. The LR states are discretely and continuously distributed in the network state space; that is, our model has discretely distributed line attractors. Thus, the LR states correspond to the discrete and continuous information representation. Additionally, using a stability analysis and a phase diagram, we find that the stability of the LR states depends on the number of stored patterns, and that an antiferromagnetic interaction and a negative external magnetic field are essential for stabilizing the LR states.

We analyze an Ising spin neural network model with a microscopic state defined by the spin configuration $\boldsymbol{S} = (S_{\theta_1}, S_{\theta_2}, \ldots, S_{\theta_N}) \in \{-1, 1\}^N$. Here $S_{\theta_i} = 1$ if neuron $i$ fires, and $S_{\theta_i} = -1$ otherwise. The $N$ neurons are placed at equal intervals $\theta_i = 2\pi i/N - \pi$ on a one-dimensional ring indexed by $\theta_i \in (-\pi, \pi]$, as shown in Fig. 1(a). The Hamiltonian of the system we are going to study is

$$H = -\frac{1}{2} \sum_{i \neq j} J_{\theta_i \theta_j} S_{\theta_i} S_{\theta_j} - h \sum_{i=1}^{N} S_{\theta_i},$$

where $h$ is an external magnetic field (representing a common external input to the neurons). The interaction $J_{\theta_i \theta_j}$ (representing the synaptic interaction between the $i$th and $j$th neurons) consists of a Hebbian interaction weighted by a Mexican-hat type interaction, together with an antiferromagnetic interaction:

$$J_{\theta_i \theta_j} = \frac{J_0}{N} \left( 1 + k \cos(\theta_i - \theta_j) \right) \sum_{\mu=1}^{p} \xi^{\mu}_{\theta_i} \xi^{\mu}_{\theta_j} - \frac{g}{N},$$

where $J_0$ represents the strength of the Hebbian interaction, $k$ represents the strength of the weighting by the Mexican-hat type interaction, and $g$ represents the strength of the antiferromagnetic interaction. $\xi^{\mu}_{\theta_i}$ denotes the $i$th component of the stored pattern vector $\boldsymbol{\xi}^{\mu} \in \{-1, 1\}^N$, and is a quenched independent random variable taking 1 or $-1$ with probability 1/2. $p$ is the number of stored patterns. We restrict ourselves to the case where $p$ is finite in the thermodynamic limit $N \to \infty$ ($p = O(1)$) throughout this paper. Our model reduces to the Hopfield model 1) when $k = g = h = 0$, and to the model with the Mexican-hat type interaction 2,4,5) when $k \to \infty$ with $J_0 k$ remaining finite, $g = 0$, $p = 1$, and $\xi^1_{\theta_i} = 1$ ($1 \le \forall i \le N$). Thus, our model includes the Hopfield model and the model with the Mexican-hat type interaction as special cases. In addition, the case where $J_0 < 0$, $k < 0$, and $g = h = 0$ in our model has been previously studied. 9) The antiferromagnetic interaction $g$ and the external magnetic field $h$ play significant roles in generating the localized retrieval phase, where a localized part of a stored pattern is retrieved, as described later.
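The model definition can be sketched numerically. The snippet below is a minimal sketch under the assumption that the coupling takes the form $J_{\theta_i \theta_j} = (J_0/N)(1 + k\cos(\theta_i - \theta_j))\sum_\mu \xi^\mu_{\theta_i}\xi^\mu_{\theta_j} - g/N$ (this form is inferred from the stated parameters and limiting cases, since the excerpt does not show the equation itself); it builds the coupling matrix, checks that the Hopfield limit $k = g = 0$ is recovered, and evaluates the Hamiltonian on a stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coupling matrix: Hebbian term weighted by a Mexican-hat (cosine)
# factor, minus a uniform antiferromagnetic inhibition g/N.
# (Assumed form, inferred from the text's parameters and limits.)
def coupling(theta, xi, J0, k, g):
    N = theta.size
    hebb = xi.T @ xi                    # sum_mu xi_i^mu xi_j^mu
    mexhat = 1.0 + k * np.cos(theta[:, None] - theta[None, :])
    return (J0 / N) * mexhat * hebb - g / N

# Hamiltonian H = -(1/2) sum_{i != j} J_ij S_i S_j - h sum_i S_i.
def energy(S, J, h):
    J_offdiag = J - np.diag(np.diag(J))  # exclude self-coupling i = j
    return -0.5 * S @ J_offdiag @ S - h * np.sum(S)

N, p = 200, 3
theta = 2.0 * np.pi * np.arange(1, N + 1) / N - np.pi  # ring positions
xi = rng.choice([-1, 1], size=(p, N))   # p quenched random patterns

# Special case k = g = 0: the coupling reduces to the Hopfield rule.
J_hop = coupling(theta, xi, J0=1.0, k=0.0, g=0.0)
print(np.allclose(J_hop, (xi.T @ xi) / N))  # True

# In this Hopfield limit a stored pattern is a low-energy state,
# far below a typical random configuration.
print(energy(xi[0], J_hop, h=0.0) <
      energy(rng.choice([-1, 1], size=N), J_hop, h=0.0))
```

Setting $k > 0$ makes the Hebbian term strongest between nearby neurons on the ring, which is what allows a pattern to be retrieved only within a localized subpopulation in the LR phase.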

In this section, we calculate the free energy per neuron and the Hessian matrix. We define the following order parameters.

