📝 Original Info
- Title: Effective linkage learning using low-order statistics and clustering
- ArXiv ID: 0710.2782
- Date: 2007-10-16
- Authors: Leonardo Emmendorfer, Aurora Pozo
📝 Abstract
The adoption of probabilistic models of the best individuals found so far is a powerful approach in evolutionary computation. Increasingly complex models have been used by estimation of distribution algorithms (EDAs), often yielding better effectiveness at finding the global optima of hard optimization problems. Supervised and unsupervised learning of Bayesian networks are very effective options, since those models are able to capture high-order interactions among the variables of a problem. Diversity preservation through niching techniques has also been shown to be very important, both for identifying the problem structure and for maintaining several global optima. Recently, clustering was evaluated as an effective niching technique for EDAs, but the performance of simpler low-order EDAs was not shown to be much improved by clustering, except on some simple multimodal problems. This work proposes and evaluates a combination operator, guided by a measure from information theory, which allows a clustered low-order EDA to effectively solve a comprehensive range of benchmark optimization problems.
📄 Full Content
arXiv:0710.2782v2 [cs.NE] 16 Oct 2007
Effective linkage learning using low-order statistics and clustering¹
Leonardo Emmendorfer†, Aurora Pozo∗
†Numerical Methods in Engineering, PhD program
∗Department of Computer Science
Federal University of Paraná, Brazil
{leonardo,aurora}@inf.ufpr.br
Abstract
The adoption of probabilistic models of the best individuals found so far is a powerful approach in
evolutionary computation. Increasingly complex models have been used by estimation of distribution
algorithms (EDAs), often yielding better effectiveness at finding the global optima of hard optimization
problems. Supervised and unsupervised learning of Bayesian networks are very effective options, since
those models are able to capture high-order interactions among the variables of a problem. Diversity
preservation through niching techniques has also been shown to be very important, both for identifying
the problem structure and for maintaining several global optima. Recently, clustering was evaluated as
an effective niching technique for EDAs, but the performance of simpler low-order EDAs was not shown
to be much improved by clustering, except on some simple multimodal problems. This work proposes and
evaluates a combination operator, guided by a measure from information theory, which allows a clustered
low-order EDA to effectively solve a comprehensive range of benchmark optimization problems.
Keywords: clustering, evolutionary computation, linkage learning, niching.
1 Introduction
Evolutionary algorithms solve optimization problems by evolving successive populations of so-
lutions until convergence occurs. Two steps are usually present at each generation: selection of
promising solutions and creation of new solutions in order to obtain a new population.
Combination of genetic information is a major concern in evolutionary computation. In the
simple genetic algorithm (sGA) [1] this mechanism is implemented as the crossover operator,
which creates a new individual from two parents by combining portions of both strings. Recently,
estimation of distribution algorithms (EDAs) introduced a novel approach to learning information
from the best individuals: a probabilistic model is inferred and then sampled in order to
generate the next population. Combination of information is achieved in EDAs
¹Submitted to IEEE Transactions on Evolutionary Computation
since a single model is built from several good individuals. Unfortunately, combining different
individuals may lead to poor results if the model adopted is not expressive enough.
Simpler order-1 EDAs adopt probabilistic models which assume independence among genes.
This class of EDA is known for its simplicity and computational efficiency in model learning,
since no search for model structure needs to be performed [2]. Further, the simple conception
and implementation should make those algorithms very attractive. Their low effectiveness on
harder benchmark problems, however, is unacceptable. This is a major drawback, since genetic
and evolutionary algorithms are known for their wide applicability and robustness [3].
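To make the order-1 model concrete, the loop below is a minimal UMDA-style sketch (an illustration, not the authors' algorithm): per-gene marginals are estimated from the selected individuals and then sampled independently. The function name, parameter values, and the use of OneMax as the fitness function are all assumptions made for the example.

```python
import random

def umda_onemax(n_bits=20, pop_size=100, n_select=50, generations=50, seed=0):
    """Order-1 EDA sketch: the model is one independent Bernoulli marginal
    per gene, so no search over model structure is needed."""
    rng = random.Random(seed)

    def fitness(ind):
        return sum(ind)  # OneMax: number of 1-bits

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        selected = pop[:n_select]  # truncation selection
        # estimate the marginal probability of a 1 at each position
        p = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
        # sample a new population from the independent marginals
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = umda_onemax()
print(sum(best))  # typically reaches the optimum (n_bits) on separable OneMax
```

On a separable problem such as OneMax the independence assumption is harmless; on deceptive, structured problems it is exactly what makes plain order-1 EDAs fail, as noted above.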
High order EDAs, on the other hand, are based on learning the linkage among genes by in-
ferring expressive probabilistic models based on searching for a factorization, which captures
the dependencies among genes. Good results are reported for several problems in the literature
whereas a high computational cost associated with the model induction stage is imposed in this class
of EDAs. Finding a factorization can be a computationally expensive process and the resulting
graph is often a suboptimal solution [4][5].
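The factorization search in high-order EDAs is typically driven by pairwise dependency statistics between genes. One common information-theoretic measure is the empirical mutual information, which is high for linked genes and near zero for independent ones; this excerpt does not specify which measure the cited algorithms (or the proposed operator) actually use, so the sketch below is purely illustrative.

```python
import math
from collections import Counter

def pairwise_mutual_information(pop, i, j):
    """Empirical mutual information I(X_i; X_j) in bits between two gene
    positions, estimated from joint frequencies in the population."""
    n = len(pop)
    pi = Counter(ind[i] for ind in pop)             # marginal of gene i
    pj = Counter(ind[j] for ind in pop)             # marginal of gene j
    pij = Counter((ind[i], ind[j]) for ind in pop)  # joint of (i, j)
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * math.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# toy population: genes 0 and 1 are always identical, gene 2 is independent
pop = [[0, 0, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1]] * 10
mi_linked = pairwise_mutual_information(pop, 0, 1)
mi_indep = pairwise_mutual_information(pop, 0, 2)
print(mi_linked, mi_indep)  # 1.0 0.0
```

Ranking gene pairs by this statistic is one way a model-building step can decide which dependencies are worth representing in the factorization.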
One of the most important efforts to make EDAs more effective is to adopt clustering as a
strong niching approach, inducing the preservation of diversity in the population. Niching is
crucial for evolutionary computation in general [6] and for EDAs in particular [7]. It both improves
the identification of the problem structure and enhances the chances of finding a higher
number of global optima on multimodal problems. The k-means clustering algorithm [8] has recently
been applied as a niching technique based on grouping genotypically similar solutions together.
The performance of simpler low-order EDAs, however, was not shown to be much improved by
clustering except for some simple unstructured multimodal problems. Low-order clustered EDAs
have not been able to solve hard deceptive structured problems [9].
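A minimal sketch of k-means used as a genotypic niching step (an illustrative implementation, not the one evaluated in [8] or [9]): individuals are grouped by similarity of their bit strings, so that each cluster can afterwards be modeled and sampled separately. Initialization and parameter choices here are assumptions.

```python
import random

def kmeans_niching(pop, k=2, iters=20, seed=0):
    """Lloyd-style k-means on bit vectors treated as points in R^n.
    Returns the clusters (niches) of genotypically similar individuals."""
    rng = random.Random(seed)
    # initialize centroids from distinct genotypes (assumes k <= #distinct)
    distinct = sorted({tuple(ind) for ind in pop})
    centroids = [list(map(float, t)) for t in rng.sample(distinct, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for ind in pop:
            d = [sum((b - c) ** 2 for b, c in zip(ind, cent))
                 for cent in centroids]
            clusters[d.index(min(d))].append(ind)
        for ci, members in enumerate(clusters):
            if members:  # keep the old centroid if the cluster is empty
                centroids[ci] = [sum(col) / len(members)
                                 for col in zip(*members)]
    return clusters

# two genotypic niches: all-zeros and all-ones solutions
pop = [[0, 0, 0, 0]] * 5 + [[1, 1, 1, 1]] * 5
clusters = kmeans_niching(pop, k=2)
print(sorted(len(c) for c in clusters))  # [5, 5]: the niches stay separated
```

Fitting a separate low-order model inside each niche prevents the marginals of distinct optima from being averaged together, which is the failure mode of a single global model on multimodal problems.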
The main contribution of this paper is to show that a simple low-order EDA aided by cluster-
ing the population and guided by information measures is able to perform linkage learning and,
therefore, solve a representative set of benchmark problems. This work extends a previous paper
[10], where some of the ideas and results reported here were first presented. Here the foundations
of the algorithm and operators proposed are discussed in a more detailed fashion, and a wider
set of experiments is presented.
…(Full text truncated)…
Reference
This content is AI-processed based on ArXiv data.