Training a Functional Link Neural Network Using an Artificial Bee Colony for Solving Classification Problems
Artificial neural networks have emerged as an important tool for classification and have been widely used to classify non-linearly separable patterns. The most popular artificial neural network model is the Multilayer Perceptron (MLP), as it performs classification tasks with significant success. However, the complexity of the MLP structure, together with problems such as local-minima trapping, overfitting, and weight interference, has made neural network training difficult. A straightforward way to avoid these problems is to remove the hidden layers. This paper presents the ability of the Functional Link Neural Network (FLNN) to overcome the structural complexity of the MLP through a single-layer architecture, and proposes Artificial Bee Colony (ABC) optimization for training the FLNN. The proposed technique is expected to provide a better learning scheme for a classifier and thus more accurate classification results.
💡 Research Summary
The paper addresses two persistent challenges in neural‑network‑based classification: the structural complexity of multilayer perceptrons (MLPs) and the difficulties of training them due to local minima, over‑fitting, and weight interference. To bypass these issues, the authors adopt a Functional Link Neural Network (FLNN), a single‑layer architecture that eliminates hidden layers entirely. In an FLNN, the raw input vector is first transformed by a set of nonlinear basis functions (e.g., polynomial, trigonometric, logarithmic) to generate an expanded feature vector φ(x). This high‑dimensional representation is then fed directly to a linear output layer, drastically reducing the number of trainable parameters while preserving the network’s ability to model nonlinear decision boundaries.
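A minimal sketch of the FLNN idea described above, assuming a trigonometric functional expansion (one common choice; the paper also mentions polynomial and logarithmic bases) and a single sigmoid output unit for binary classification. The function names and the expansion order are illustrative, not taken from the paper:

```python
import numpy as np

def functional_expansion(x, order=2):
    """Trigonometric expansion of a raw input vector x into phi(x).

    Each feature x_i is mapped to
    [x_i, sin(pi*x_i), cos(pi*x_i), sin(2*pi*x_i), cos(2*pi*x_i), ...],
    so a d-dimensional input yields d*(1 + 2*order) expanded features.
    """
    parts = [x]
    for k in range(1, order + 1):
        parts.append(np.sin(k * np.pi * x))
        parts.append(np.cos(k * np.pi * x))
    return np.concatenate(parts)

def flnn_output(x, w, b):
    """Linear output layer on the expanded features, squashed by a sigmoid."""
    phi = functional_expansion(x)
    return 1.0 / (1.0 + np.exp(-(phi @ w + b)))
```

Because all the nonlinearity lives in `functional_expansion`, the only trainable parameters are the weight vector `w` and bias `b` of the single output layer.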
The main obstacle of FLNNs lies in the optimization of the weight vector that connects the expanded feature space to the output. Traditional gradient‑based methods (e.g., back‑propagation) still suffer from the same pitfalls as MLPs when applied to this high‑dimensional space. Consequently, the authors propose to train the FLNN using the Artificial Bee Colony (ABC) algorithm, a population‑based meta‑heuristic inspired by the foraging behavior of honey bees. In ABC, three types of bees—employed, onlooker, and scout—cooperate to explore and exploit the search space. Each bee represents a candidate weight vector; the fitness of a candidate is measured by the classification error (or a cross‑validated error rate). Employed bees perturb their current solution to generate a neighbor; onlookers probabilistically select promising solutions based on fitness; scouts randomly re‑initialize poor solutions after a “limit” of unsuccessful attempts. This iterative process balances global exploration and local exploitation, allowing the algorithm to escape local minima and converge toward a near‑optimal weight configuration.
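The bee phases described above can be sketched as a generic ABC minimizer over candidate weight vectors. This is an illustrative implementation under stated assumptions (uniform initialization in a symmetric interval, one-dimension perturbation per neighbor, fitness supplied as a classification-error callback); the paper's exact update rules and settings may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_train(fitness, dim, n_food=10, limit=20, max_iter=100, bound=1.0):
    """Minimise `fitness` over `dim`-dimensional weight vectors with ABC.

    fitness: maps a weight vector to a scalar error (lower is better),
             e.g. the FLNN classification error on a training set.
    """
    foods = rng.uniform(-bound, bound, (n_food, dim))  # candidate weight vectors
    costs = np.array([fitness(f) for f in foods])
    trials = np.zeros(n_food)                          # unsuccessful-update counters

    def try_neighbor(i):
        # Perturb one random dimension of source i toward a random partner k != i.
        k = rng.integers(n_food - 1)
        k = k + (k >= i)
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        c = fitness(cand)
        if c < costs[i]:                               # greedy selection
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                        # employed-bee phase
            try_neighbor(i)
        # Onlooker phase: revisit sources with probability proportional to quality.
        p = costs.max() - costs + 1e-12
        p = p / p.sum()
        for i in rng.choice(n_food, size=n_food, p=p):
            try_neighbor(i)
        # Scout phase: abandon a source exhausted beyond the trial limit.
        worst = int(np.argmax(trials))
        if trials[worst] > limit:
            foods[worst] = rng.uniform(-bound, bound, dim)
            costs[worst] = fitness(foods[worst])
            trials[worst] = 0

    best = int(np.argmin(costs))
    return foods[best], costs[best]
```

Plugging an FLNN error function in as `fitness` replaces gradient descent entirely: the greedy employed/onlooker updates exploit good weight regions, while the scout re-initialization keeps the search from stalling in a local minimum.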
The authors detail the algorithmic settings: the number of bees (population size), the maximum number of iterations, and the limit parameter governing scout activation. Weight initialization is uniform in