Variations of Genetic Algorithms
The goal of this project is to develop Genetic Algorithms (GAs) that solve the Schaffer F6 function in fewer than 4000 function evaluations over a total of 30 runs. Four types of GA are presented - Generational GA (GGA), Steady-State (μ+1)-GA (SSGA), Steady-Generational (μ,μ)-GA (SGGA), and (μ+μ)-GA.
Research Summary
The paper investigates the performance of four well-known variants of Genetic Algorithms (GAs) on the continuous benchmark problem Schaffer F6, with the explicit goal of solving the problem in fewer than 4,000 function evaluations over 30 independent runs. The four GA variants examined are: (1) a Generational GA (GGA), (2) a Steady-State (μ+1)-GA (SSGA), (3) a Steady-Generational (μ,μ)-GA (SGGA), and (4) a (μ+μ)-GA. For each variant three crossover operators are considered - Single-Point Crossover (SPX), Mid-Point Crossover (MPX), and Blend Crossover (BLX). All experiments use a fixed population size of 16, a mutation rate of 0.012, binary tournament selection, and a search space bounded by -100 to +100 in both dimensions.
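The benchmark itself is easy to state. As a point of reference, here is a minimal Python sketch of the Schaffer F6 function (the paper's own code is not provided, so this is an independent implementation of the standard formula):

```python
import math

def schaffer_f6(x: float, y: float) -> float:
    """Schaffer F6: global minimum of 0 at (x, y) = (0, 0); values lie in [0, 1]."""
    r2 = x * x + y * y
    numerator = math.sin(math.sqrt(r2)) ** 2 - 0.5
    denominator = (1.0 + 0.001 * r2) ** 2
    return 0.5 + numerator / denominator

print(schaffer_f6(0.0, 0.0))  # 0.0 at the global optimum
```

The concentric "ripples" produced by the sine term are what make this function a popular test of a GA's ability to escape local optima.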
Methodology
Each GA-crossover combination is run 30 times; the number of function evaluations required to reach the termination criterion is recorded. The authors then apply a two-stage statistical analysis. First, an ANOVA test determines whether any pairwise differences are statistically significant (p < 0.05). If a significant difference is found, the worst-performing algorithm is removed and the test is repeated until all remaining algorithms are statistically indistinguishable (p >= 0.05). Next, a Student's t-test (one- or two-tailed as appropriate) is used to refine the equivalence classes, with |t| > 1.7 indicating a significant difference: values around 1.5 keep two algorithms in the same equivalence class, while values around 1.9 place them in different classes. The authors also use an F-test to decide whether to assume equal variances in the t-test.
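As a rough illustration of the second stage, the pooled-variance Student's t statistic can be computed directly from two samples of run counts. The data below are synthetic placeholders (not the paper's measurements), and the 1.7 cutoff is the paper's rule of thumb:

```python
import math
import random

random.seed(1)

# Hypothetical evaluation counts for two configurations, 30 runs each
runs_a = [random.gauss(1600, 200) for _ in range(30)]
runs_b = [random.gauss(1650, 200) for _ in range(30)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Sample variance with Bessel's correction."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def student_t(a, b):
    """Two-sample t statistic assuming equal variances (pooled estimate)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * var(a) + (nb - 1) * var(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# F-ratio of the sample variances, used to justify the equal-variance assumption
f_ratio = var(runs_a) / var(runs_b)

t = student_t(runs_a, runs_b)
same_class = abs(t) <= 1.7  # paper's rule of thumb: |t| > 1.7 => different
```

A library such as SciPy would normally supply these tests; the hand-rolled version is shown only to make the equal-variance (pooled) assumption explicit.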
Algorithmic Details
- GGA creates a full set of P offspring each generation and replaces the entire parent population, so it performs P function evaluations per cycle. No elitism is applied, and the generations do not overlap.
- SSGA ((μ+1)-GA) selects two parents, produces a single offspring, and replaces the worst individual. Only one function evaluation is required per cycle, dramatically reducing evaluation cost.
- SGGA ((μ,μ)-GA) also creates a single offspring per cycle, but replaces a randomly chosen non-best individual rather than the worst. Two evaluations are required per cycle.
- (μ+μ)-GA generates a child population of size μ, merges it with the parent population (2μ individuals in total), and then selects the top μ individuals. This approach incurs the highest per-cycle evaluation cost but guarantees that the best individuals from both generations survive.
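To make the (μ+1) bookkeeping concrete, the steady-state variant can be sketched as a short loop. The population size (16), mutation rate (0.012), binary tournament, and evaluation budget (4,000) follow the paper; the Gaussian mutation step size of 5 and the use of mid-point crossover inside the loop are illustrative assumptions:

```python
import math
import random

random.seed(42)

def schaffer_f6(x, y):
    r2 = x * x + y * y
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1 + 0.001 * r2) ** 2

POP, MUT_RATE, LO, HI = 16, 0.012, -100.0, 100.0

def tournament(pop):
    """Binary tournament: the fitter of two random individuals wins."""
    a, b = random.sample(pop, 2)
    return a if schaffer_f6(*a) < schaffer_f6(*b) else b

pop = [(random.uniform(LO, HI), random.uniform(LO, HI)) for _ in range(POP)]
evals = POP  # the initial population is evaluated once

while evals < 4000:
    p1, p2 = tournament(pop), tournament(pop)
    # Mid-point crossover: the child is the gene-wise average of the parents
    child = tuple((g1 + g2) / 2 for g1, g2 in zip(p1, p2))
    if random.random() < MUT_RATE:
        child = tuple(min(HI, max(LO, g + random.gauss(0, 5))) for g in child)
    evals += 1  # only the new child costs an evaluation per cycle
    # Replace the worst individual (fitnesses would be cached in practice)
    worst = max(range(POP), key=lambda i: schaffer_f6(*pop[i]))
    pop[worst] = child

best = min(pop, key=lambda ind: schaffer_f6(*ind))
```

The key contrast with GGA is visible in the counter: each cycle adds one evaluation rather than a full population's worth.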
Crossover Operators
- SPX splits each parent chromosome at a single cut point, producing two children. It works for both binary and real-coded representations but introduces limited diversity.
- MPX computes the midpoint of each gene pair and creates a single real-coded offspring. It tends to concentrate the search around the average of the parents, improving convergence speed on continuous problems.
- BLX selects each gene of the offspring uniformly at random within the interval defined by the two parent genes, thereby increasing diversity but potentially slowing convergence due to excessive randomness.
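For real-coded chromosomes, the three operators can be sketched in a few lines each (a minimal illustration, not the paper's code):

```python
import random

random.seed(0)

def spx(p1, p2):
    """Single-point crossover: swap the gene tails after a random cut point."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mpx(p1, p2):
    """Mid-point crossover: one child at the gene-wise average of the parents."""
    return tuple((a + b) / 2 for a, b in zip(p1, p2))

def blx(p1, p2):
    """Blend crossover: each gene drawn uniformly between the parent genes."""
    return tuple(random.uniform(min(a, b), max(a, b)) for a, b in zip(p1, p2))
```

The trade-off discussed above is visible directly: MPX is deterministic and contracts the population toward its centroid, while BLX re-randomizes every gene within the parental interval.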
Results
The statistical analysis yields the following key findings:
- SSGA-MPX achieves the lowest average number of function evaluations (~1,497) among all SSGA configurations, and all three SSGA crossover methods belong to the same equivalence class.
- SGGA-MPX records an average of ~2,636 evaluations, while the worst-performing SGGA configuration averages ~3,637 evaluations, placing SGGA in a distinct, less-efficient class.
- GGA-MPX and (μ+μ)-GA-MPX both achieve average evaluation counts around 1,600. ANOVA produces a p-value > 0.05 when comparing these two, and the subsequent t-test yields |t| ~ 0.09, indicating that they belong to the same equivalence class. Consequently, these two configurations are identified as the most efficient overall.
- For the other crossover operators (SPX and BLX), the average evaluation counts are consistently higher than for MPX across all GA variants, and the statistical tests confirm significant differences.
Overall, the MPX operator consistently outperforms SPX and BLX in this continuous-domain setting, regardless of the underlying GA variant. While SSGA and SGGA reduce per-generation evaluation cost, their final convergence quality is inferior to that of GGA and (μ+μ)-GA when paired with MPX.
Interpretation and Implications
The study demonstrates that, under a strict evaluation budget, the choice of crossover operator can have a larger impact on performance than the choice of GA variant. MPX's averaging mechanism efficiently guides the search toward promising regions of the continuous landscape, so fewer evaluations are needed to reach the target fitness. The (μ+μ)-GA's elitist selection ensures that high-quality solutions are retained, compensating for its higher per-generation evaluation cost. Consequently, for problems similar to Schaffer F6, practitioners should consider using a traditional generational GA or a (μ+μ)-GA combined with MPX crossover to achieve the best trade-off between evaluation economy and solution quality.
Limitations and Future Work
The experiments are confined to a single benchmark function; extending the analysis to a broader suite of continuous and discrete problems would test the generality of the conclusions. Moreover, exploring adaptive mutation rates, dynamic population sizes, or self-adjusting crossover strategies could further improve efficiency. Finally, integrating multi-objective optimization or real-time constraint handling would broaden the applicability of the findings to more complex, real-world scenarios.
In summary, the paper provides a thorough empirical comparison of four GA variants and three crossover operators under a constrained evaluation budget, identifies MPX-augmented GGA and (μ+μ)-GA as the top performers, and offers practical guidance for researchers and engineers seeking efficient evolutionary solutions under tight computational limits.