A Multidimensional Cascade Neuro-Fuzzy System with Neuron Pool Optimization in Each Cascade


📝 Abstract

A new architecture and learning algorithms for the multidimensional hybrid cascade neural network with neuron pool optimization in each cascade are proposed in this paper. The proposed system differs from the well-known cascade systems in its capability to process multidimensional time series in an online mode, which makes it possible to process non-stationary stochastic and chaotic signals with the required accuracy. Compared to conventional analogs, the proposed system provides computational simplicity and possesses both tracking and filtering capabilities.

📄 Content

A Multidimensional Cascade Neuro-Fuzzy System with Neuron Pool Optimization in Each Cascade

Yevgeniy V. Bodyanskiy, Kharkiv National University of Radio Electronics, Kharkiv, Ukraine
Email: bodya@kture.kharkov.ua

Oleksii K. Tyshchenko and Daria S. Kopaliani, Kharkiv National University of Radio Electronics, Kharkiv, Ukraine
Email: {lehatish, daria.kopaliani}@gmail.com

Abstract— A new architecture and learning algorithms for the multidimensional hybrid cascade neural network with neuron pool optimization in each cascade are proposed in this paper. The proposed system differs from the well-known cascade systems in its capability to process multidimensional time series in an online mode, which makes it possible to process non-stationary stochastic and chaotic signals with the required accuracy. Compared to conventional analogs, the proposed system provides computational simplicity and possesses both tracking and filtering capabilities.

Index Terms— learning method, cascade system, neo-fuzzy neuron, computational intelligence.

I. INTRODUCTION

Today artificial neural networks (ANNs) and neuro-fuzzy systems (NFSs) are successfully used in a wide range of data processing problems, where data can be presented either in the form of "object-property" tables or in the form of time series, often produced by non-stationary nonlinear stochastic or chaotic systems. The advantages ANNs and NFSs have over other existing approaches derive from their universal approximating capabilities and learning capacities. Conventionally, "learning" is defined as a process of adjusting synaptic weights using an optimization procedure that searches for the extremum of a given learning criterion. The learning process quality can be improved by adjusting the network topology along with its synaptic weights [1, 2]. This idea is the foundation of evolving computational intelligence systems [3, 4]. One of the most successful implementations of this approach is cascade-correlation neural networks [5–8], owing to their high efficiency and the simplicity of learning both the synaptic weights and the network topology.

Such a network starts off with a simple architecture consisting of a pool (ensemble) of neurons that are trained independently (the first cascade). Each neuron in the pool can have a different activation function and a different learning algorithm, and the neurons do not interact with each other during training. After all the neurons in the pool of the first cascade have had their weights adjusted, the neuron that is best with respect to the learning criterion forms the first cascade, and its synaptic weights can no longer be adjusted. The second cascade is then formed, usually out of similar neurons in the training pool; the only difference is that the neurons trained in the pool of the second cascade have an additional input (and therefore an additional synaptic weight), namely the output of the first cascade.
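The greedy growth scheme described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: candidate diversity in the pool is simulated here by random input subsets, and each candidate is a neuron whose output is linear in its weights, fitted by least squares; the function names and parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_neuron(X, y):
    """Least-squares fit of one neuron whose output is linear in its
    synaptic weights (bias handled via an appended constant input)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def neuron_output(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

def grow_cascade(X, y, n_cascades=3, pool_size=4):
    """Greedy cascade growth: each cascade trains a pool of candidate
    neurons (here differing only by a random input subset), keeps the one
    with the smallest squared error, freezes it, and feeds its output to
    the next cascade as an extra input."""
    frozen = []      # (weights, input mask) of each cascade's winning neuron
    Xc = X.copy()    # current input matrix; grows by one column per cascade
    for _ in range(n_cascades):
        best = None
        for _ in range(pool_size):
            mask = rng.random(Xc.shape[1]) < 0.8   # candidate's input subset
            mask[0] = True                          # keep at least one input
            w = train_linear_neuron(Xc[:, mask], y)
            err = np.mean((neuron_output(w, Xc[:, mask]) - y) ** 2)
            if best is None or err < best[0]:
                best = (err, w, mask)
        err, w, mask = best
        frozen.append((w, mask))                    # weights are now fixed
        Xc = np.hstack([Xc, neuron_output(w, Xc[:, mask])[:, None]])
    return frozen, err
```

The key structural point is that each new cascade sees the frozen outputs of all earlier cascades as additional inputs, so only the current pool is ever being trained.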
Similar to the first cascade, the second cascade eliminates all but the one neuron showing the best performance, whose synaptic weights are thereafter fixed. Neurons of the third cascade have two additional inputs, namely the outputs of the first and second cascades. The evolving network continues to add new cascades to its architecture until it reaches the desired quality of problem solving over the given training set.

The authors of the most popular cascade neural network, CasCorLA, S. E. Fahlman and C. Lebiere, used elementary Rosenblatt perceptrons with traditional sigmoidal activation functions and adjusted synaptic weights using the Quickprop algorithm (a modification of the δ-learning rule). Since the output signal of such neurons depends nonlinearly on the synaptic weights, the learning rate cannot be increased for them. In order to avoid multi-epoch learning [9–16], different types of neurons, with outputs that depend linearly on the synaptic weights, should be used as network nodes. This would allow the use of learning algorithms that are optimal in terms of speed and that process data as it arrives at the network input. However, if the network is learning in an online mode, it is impossible to determine the best neuron in the pool: while working with non-stationary objects, one neuron of the training pool can be identified as the best for one part of the training set but not for the others. Thus we suggest that all neurons remain in the training pool and that a certain optimization procedure (derived from a general network quality criterion) is used to determine the output of the cascade.
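This summary does not specify the optimization procedure used to form the cascade output from the retained pool. As an illustration only (not the paper's method), a normalized-LMS style recursion can adapt the mixing weights online, so that whichever pool neuron currently tracks the non-stationary signal best is automatically emphasized; the function and its parameters are assumptions for this sketch.

```python
import numpy as np

def online_pool_combiner(pool_outputs, targets, eta=0.5):
    """Recursively adapt mixing weights c so that the cascade output
    y_hat = c @ u (u = vector of pool-neuron outputs at one time step)
    tracks the target. A normalized-LMS update stands in for the paper's
    quality-criterion-based optimization procedure."""
    n_pool = pool_outputs.shape[1]
    c = np.full(n_pool, 1.0 / n_pool)       # start from a uniform mixture
    errors = []
    for u, y in zip(pool_outputs, targets):
        y_hat = c @ u                        # combined cascade output
        e = y - y_hat
        c += eta * e * u / (u @ u + 1e-8)    # normalized gradient step
        errors.append(e)
    return c, np.asarray(errors)
```

Because the weights are updated at every sample, no neuron is ever discarded, which is exactly what an online setting with non-stationary data requires.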
It should be noted that the well-known cascade neural networks implement a non-linear mapping R^n → R^1, i.e. they are systems with a single output. At the same time, many problems solved with the help of ANNs and NFSs require the implementation of a multidimensional mapping R^n → R^g, which means that g times more neurons are required.

This content is AI-processed based on ArXiv data.
