Memetic Artificial Bee Colony Algorithm for Large-Scale Global Optimization


Memetic computation (MC) has recently emerged as a paradigm of efficient algorithms for solving the hardest optimization problems. Artificial bee colony (ABC) algorithms, in turn, perform well on both continuous and combinatorial optimization problems. This study brings the two technologies under the same roof. The result is a memetic ABC (MABC) algorithm hybridized with two local search heuristics: the Nelder-Mead algorithm (NMA) and random walk with direction exploitation (RWDE). The former is directed more towards exploration, the latter more towards exploitation of the search space. A stochastic adaptation rule controls the balance between exploration and exploitation. The MABC algorithm was applied to the benchmark suite of the Special Session on Large-Scale Continuous Global Optimization at the 2012 IEEE Congress on Evolutionary Computation. The results obtained by MABC are comparable with those of DECC-G, DECC-G*, and MLCC.


💡 Research Summary

The paper introduces a Memetic Artificial Bee Colony (MABC) algorithm that integrates two local‑search heuristics into the classical Artificial Bee Colony (ABC) framework to tackle large‑scale continuous global optimization problems. The authors begin by outlining the limitations of standard ABC when applied to high‑dimensional spaces: while ABC’s stochastic foraging behavior provides good global exploration, it often suffers from premature convergence and insufficient exploitation in very large search spaces. To overcome this, the authors adopt a memetic computation paradigm, which couples a global meta‑heuristic with problem‑specific local refinements.

MABC incorporates the Nelder‑Mead Simplex algorithm (NMA) as an exploration‑oriented local search and a Random Walk with Direction Exploitation (RWDE) as an exploitation‑oriented refinement. NMA manipulates a simplex of candidate points, expanding, reflecting, contracting, or shrinking it to probe new regions of the search space. This is particularly effective for discovering promising basins in high‑dimensional landscapes. RWDE, on the other hand, estimates a promising direction from the current best solution and performs a constrained random walk along that direction, allowing fine‑grained adjustments that accelerate convergence near optima.
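The RWDE idea described above can be sketched in a few lines. This is a minimal, illustrative implementation of the classic random-walk-with-direction-exploitation scheme, not the paper's exact operator: the step length, shrink factor, and iteration budget are assumed defaults.

```python
import numpy as np

def rwde(f, x0, step=0.5, shrink=0.5, max_iters=200, rng=None):
    """Random walk with direction exploitation (illustrative sketch).

    Steps along random unit directions; a direction that improves f is
    exploited by stepping along it again, and the step length shrinks
    whenever a sampled direction fails to improve.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iters):
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)           # random unit direction
        cand = x + step * u
        fc = f(cand)
        if fc < fx:
            while fc < fx:               # exploit the improving direction
                x, fx = cand, fc
                cand = x + step * u
                fc = f(cand)
        else:
            step *= shrink               # no improvement: contract the walk
        if step < 1e-12:
            break
    return x, fx
```

For the exploratory role, a ready-made Nelder-Mead implementation is available as `scipy.optimize.minimize(method='Nelder-Mead')`, should one want to reproduce the pairing without hand-coding the simplex moves.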

A stochastic adaptation rule governs the dynamic balance between the two phases. At each iteration the algorithm measures population diversity and the rate of improvement in fitness. When diversity drops or progress stalls, the probability of invoking NMA is increased to inject exploratory moves; conversely, when diversity is sufficient and improvement is steady, RWDE is favored to intensify exploitation. This adaptive mechanism mitigates the classic exploration‑exploitation trade‑off and is especially valuable for problems with thousands of variables.
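A rule of this kind might look like the following sketch. The thresholds, adaptation increment, and the specific diversity measure (mean per-dimension standard deviation) are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def choose_local_search(pop, best_history, p_nma=0.5,
                        div_threshold=1e-3, adapt=0.1, rng=None):
    """Stochastic adaptation sketch (assumed form of the rule).

    Raises the probability of invoking the exploratory NMA when
    population diversity drops or the best fitness stagnates; lowers
    it (favoring the exploitative RWDE) otherwise.
    """
    rng = rng or np.random.default_rng()
    diversity = float(np.mean(np.std(pop, axis=0)))
    stalled = len(best_history) >= 2 and best_history[-1] >= best_history[-2]
    if diversity < div_threshold or stalled:
        p_nma = min(1.0, p_nma + adapt)   # inject exploratory moves
    else:
        p_nma = max(0.0, p_nma - adapt)   # intensify exploitation
    op = "NMA" if rng.random() < p_nma else "RWDE"
    return op, p_nma
```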

The MABC procedure retains the three canonical ABC phases—employed bees, onlooker bees, and scouts—but augments them: employed bees perform NMA on their associated food sources, onlooker bees apply RWDE to the selected sources, and scouts continue to introduce random new solutions. Parameter settings for NMA (reflection, expansion, contraction coefficients) and RWDE (step size, direction‑estimation window) were tuned through preliminary experiments.
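The three augmented phases can be sketched as a single loop. Everything here except the `nma`/`rwde` hooks is the standard ABC algorithm; the hooks, the population size, and the `limit` value are placeholders for the paper's tuned components.

```python
import numpy as np

def mabc_sketch(f, dim, n_food=10, limit=30, max_evals=2000,
                bounds=(-5.0, 5.0), rng=None, nma=None, rwde=None):
    """Minimal memetic-ABC skeleton (illustrative). `nma` and `rwde`
    are optional callables x -> candidate standing in for the paper's
    local searches; when absent, the plain ABC neighbour move is used."""
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))
    fits = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)
    evals = n_food

    def mutate(i):
        # Standard ABC neighbour move on one randomly chosen coordinate.
        k = rng.choice([m for m in range(n_food) if m != i])
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
        return np.clip(cand, lo, hi)

    def try_replace(i, cand):
        nonlocal evals
        fc = f(cand); evals += 1
        if fc < fits[i]:
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    while evals < max_evals:
        for i in range(n_food):              # employed bees (NMA hook)
            try_replace(i, nma(foods[i]) if nma else mutate(i))
        probs = fits.max() - fits + 1e-12    # fitness-proportional choice
        probs /= probs.sum()
        for _ in range(n_food):              # onlooker bees (RWDE hook)
            i = int(rng.choice(n_food, p=probs))
            try_replace(i, rwde(foods[i]) if rwde else mutate(i))
        for i in range(n_food):              # scouts: reset stale sources
            if trials[i] > limit:
                foods[i] = rng.uniform(lo, hi, dim)
                fits[i] = f(foods[i]); evals += 1
                trials[i] = 0
    b = int(fits.argmin())
    return foods[b], float(fits[b])
```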

Performance evaluation was carried out on the 2012 IEEE Congress on Evolutionary Computation Large‑Scale Continuous Global Optimization (LSCGO) benchmark suite. The suite comprises 20 test functions (including unimodal, multimodal, separable, non‑separable, and constrained variants) at dimensions of 1000, 2000, and 5000. For each function, 30 independent runs were executed, and metrics such as mean best fitness, standard deviation, and success rate (percentage of runs reaching a predefined error threshold) were recorded. The authors compared MABC against three state‑of‑the‑art algorithms: DECC‑G, DECC‑G* and MLCC, all of which have demonstrated strong performance on large‑scale problems.
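The per-function metrics listed above are straightforward to compute from the 30 independent runs; a small helper might look like this (the target-error value is a placeholder, as the suite's actual threshold is not stated here):

```python
import numpy as np

def summarize_runs(best_fitness_per_run, target_error=1e-8):
    """Aggregate per-run best fitness values into the reported metrics:
    mean best fitness, standard deviation, and success rate (fraction
    of runs reaching the error threshold)."""
    runs = np.asarray(best_fitness_per_run, dtype=float)
    return {
        "mean": float(runs.mean()),
        "std": float(runs.std(ddof=1)),            # sample std over runs
        "success_rate": float((runs <= target_error).mean()),
    }
```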

Results show that MABC achieves comparable or slightly superior mean fitness values on the majority of functions. Notably, on high‑dimensional sphere‑type and composite non‑linear functions (e.g., Rastrigin, Griewank, Ackley) MABC converged faster, reaching the target error 10–15 % more often within the same evaluation budget. On highly multimodal functions (e.g., Schwefel, Weierstrass) the algorithm occasionally became trapped in local minima, indicating that further refinement of the local‑search parameters could be beneficial. Despite the added local‑search steps, the overall computational overhead remained modest—approximately 5–10 % higher runtime than vanilla ABC—demonstrating that the memetic components are efficiently integrated.

In conclusion, the study validates that embedding complementary memetic operators into ABC yields a robust, adaptive optimizer capable of handling the curse of dimensionality inherent in large‑scale continuous problems. The stochastic adaptation rule effectively balances exploration and exploitation, while NMA and RWDE provide synergistic global and local search capabilities. Future work is suggested in three directions: (1) automatic self‑tuning of the local‑search hyper‑parameters, (2) extension to multi‑objective large‑scale optimization, and (3) application to real‑world engineering design problems where evaluation costs are high. The MABC framework thus represents a promising addition to the toolbox of evolutionary computation for high‑dimensional optimization.