An improved educational competition optimizer with multi-covariance learning operators for global optimization problems
The educational competition optimizer (ECO) is a recently introduced metaheuristic algorithm inspired by human behavior, originating from the dynamics of educational competition within society. Nonetheless, ECO is constrained by an imbalance between exploitation and exploration, which leaves it susceptible to local optima and limits its effectiveness on complex optimization problems. To address these limitations, this study presents an improved educational competition optimizer (IECO-MCO) utilizing multi-covariance learning operators. In IECO-MCO, three distinct covariance learning operators are introduced to improve the performance of ECO. Each operator effectively balances exploitation and exploration while preventing premature convergence of the population. The effectiveness of IECO-MCO is assessed on benchmark functions from the CEC 2017 and CEC 2022 test suites, and its performance is compared with various basic and improved algorithms across different categories. The results demonstrate that IECO-MCO surpasses the basic ECO and other competing algorithms in convergence speed, stability, and the capability to avoid local optima. Furthermore, statistical analyses, including the Friedman test, the Kruskal-Wallis test, and the Wilcoxon rank-sum test, are conducted to validate the superiority of IECO-MCO over the compared algorithms. Compared with the basic algorithms (improved algorithms), IECO-MCO achieved an average ranking of 2.213 (2.488) across the CEC 2017 and CEC 2022 test suites. Additionally, the practical applicability of the proposed IECO-MCO algorithm is verified by solving constrained optimization problems. The experimental outcomes demonstrate the superior performance of IECO-MCO in tackling intricate optimization problems, underscoring its robustness and practical effectiveness in real-world scenarios.
💡 Research Summary
The paper introduces an enhanced version of the Educational Competition Optimizer (ECO), named IECO‑MCO, which integrates three novel covariance‑based learning operators to address the well‑known exploitation‑exploration imbalance of the original algorithm. ECO models human educational competition, but its simple update rules often cause premature convergence on local optima, especially for high‑dimensional, multimodal problems. IECO‑MCO retains the basic population‑based framework of ECO while adding (1) a global‑covariance operator that contracts the search step toward the current best solution, (2) a local‑covariance operator that extracts correlation information from neighboring individuals to generate diverse offspring, and (3) a time‑varying covariance scheduler that gradually reduces the covariance magnitude, thereby accelerating convergence in later generations.
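The three operator families described above can be illustrated with a minimal NumPy sketch. The function names, the neighbor count `k`, the ridge term, and the quadratic decay schedule below are illustrative assumptions for exposition, not the paper's exact formulations:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_covariance_step(pop, best, t, t_max):
    """Contract the search toward the current best solution, with steps
    drawn from the whole population's covariance (global operator sketch)."""
    d = pop.shape[1]
    cov = np.cov(pop, rowvar=False) + 1e-12 * np.eye(d)  # ridge keeps cov PSD
    noise = rng.multivariate_normal(np.zeros(d), cov, size=len(pop))
    shrink = 1.0 - t / t_max          # step magnitude decays over generations
    return best + shrink * noise

def local_covariance_step(pop, k=3):
    """Perturb each individual using the covariance of its k nearest
    neighbors, extracting local correlation structure (local operator sketch)."""
    d = pop.shape[1]
    out = np.empty_like(pop)
    for i, x in enumerate(pop):
        # Nearest neighbors by Euclidean distance, excluding the point itself.
        idx = np.argsort(np.linalg.norm(pop - x, axis=1))[1:k + 1]
        cov = np.cov(pop[idx], rowvar=False) + 1e-12 * np.eye(d)
        out[i] = x + rng.multivariate_normal(np.zeros(d), cov)
    return out

def covariance_schedule(t, t_max, sigma0=1.0):
    """Time-varying scale that shrinks the covariance magnitude each
    generation, pushing later iterations toward exploitation."""
    return sigma0 * (1.0 - t / t_max) ** 2
```

In this sketch the decay terms do the exploration-to-exploitation handover: early generations sample wide, correlated steps, while late generations collapse toward the incumbent best.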
Each operator is selected probabilistically at every iteration. The selection probabilities are adaptively updated based on the recent improvement contributed by each operator, allowing the algorithm to automatically emphasize the most effective operator for a given problem landscape. This adaptive mechanism creates a dynamic balance: early generations favor broad exploration via large covariance matrices, while later generations focus on fine‑grained exploitation.
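A success-driven operator selector of this kind is commonly implemented as a roulette wheel over adaptive probabilities. The update rule below (additive reward with a probability floor) is a hypothetical sketch of the mechanism described, not the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

class OperatorSelector:
    """Roulette-wheel selection over operators, with probabilities updated
    from each operator's recent fitness improvement (illustrative sketch)."""

    def __init__(self, n_ops, learning_rate=0.1, p_min=0.05):
        self.p = np.full(n_ops, 1.0 / n_ops)   # start uniform
        self.lr = learning_rate
        self.p_min = p_min                     # floor keeps every operator alive

    def choose(self):
        # Sample an operator index in proportion to its current probability.
        return int(rng.choice(len(self.p), p=self.p))

    def update(self, op, improvement):
        # Reward only positive improvement, then renormalize to a distribution.
        self.p[op] += self.lr * max(improvement, 0.0)
        self.p = np.maximum(self.p, self.p_min)
        self.p /= self.p.sum()
```

The probability floor `p_min` is a standard safeguard in adaptive operator selection: it prevents a temporarily unproductive operator from being starved out entirely, which matters when the problem landscape changes character between early and late generations.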
The authors evaluate IECO‑MCO on the CEC 2017 (30‑dimensional) and CEC 2022 (50‑dimensional) benchmark suites, comprising 30 functions each, covering unimodal, multimodal, hybrid, and composition types. Comparative algorithms include the original ECO, Particle Swarm Optimization, Differential Evolution, Grey Wolf Optimizer, and several recent improved meta‑heuristics. Performance metrics are best‑found value, mean and standard deviation over 30 independent runs, and convergence curves. Statistical significance is assessed with the Friedman test (followed by Nemenyi post‑hoc analysis), Kruskal‑Wallis test, and Wilcoxon rank‑sum test.
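All three significance tests named above are available in SciPy. The snippet below shows how such a comparison might be run; the "final error" samples here are synthetic stand-ins generated for illustration, not the paper's actual benchmark results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic final-error samples for three algorithms over 30 independent runs.
runs = 30
algo_a = rng.normal(loc=0.5, scale=0.1, size=runs)   # stand-in "proposed" method
algo_b = rng.normal(loc=1.0, scale=0.2, size=runs)
algo_c = rng.normal(loc=1.2, scale=0.3, size=runs)

# Friedman test: rank-based, treats the runs as paired blocks across algorithms.
f_stat, f_p = stats.friedmanchisquare(algo_a, algo_b, algo_c)

# Kruskal-Wallis test: unpaired rank-based comparison of all groups at once.
h_stat, h_p = stats.kruskal(algo_a, algo_b, algo_c)

# Wilcoxon rank-sum test: pairwise comparison between two algorithms.
w_stat, w_p = stats.ranksums(algo_a, algo_b)
```

A small p-value in the omnibus Friedman or Kruskal-Wallis test justifies the pairwise rank-sum comparisons (and post-hoc analysis) that produce the per-algorithm average rankings reported in the paper.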
Results show that IECO‑MCO consistently achieves lower mean errors and tighter deviations than all competitors. Across the CEC 2017 and CEC 2022 suites, its average ranking is 2.213 against the basic algorithms and 2.488 against the improved algorithms, the best among all tested methods. Notably, on highly multimodal functions (e.g., F1‑F5) and complex hybrid functions (F16‑F20), IECO‑MCO reaches near‑optimal values within the first 1,000 generations, whereas many rivals still exhibit slow progress. Convergence plots illustrate a steeper descent in the early phase, followed by a smooth plateau, confirming the intended exploration‑exploitation transition.
To demonstrate practical relevance, the algorithm is applied to 13 standard constrained optimization problems (G01‑G13). IECO‑MCO respects constraints by incorporating penalty terms and leverages the covariance operators to steer the search away from infeasible regions. In all cases it finds solutions with negligible constraint violation and objective values comparable to or better than specialized constrained optimizers.
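A static penalty term of the kind mentioned can be sketched as follows. The penalty weight `rho` and the equality tolerance `eps` are illustrative choices (a small tolerance on equality constraints is the common convention in the G-suite benchmarks), not the paper's exact settings:

```python
import numpy as np

def penalized_objective(f, g_list, h_list, x, rho=1e6, eps=1e-4):
    """Static-penalty fitness: objective plus a large multiple of the total
    constraint violation. Inequalities are g_i(x) <= 0; equalities h_j(x) = 0
    count as satisfied when |h_j(x)| <= eps."""
    viol = sum(max(g(x), 0.0) for g in g_list)
    viol += sum(max(abs(h(x)) - eps, 0.0) for h in h_list)
    return f(x) + rho * viol

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1,
# rewritten in standard form as g(x) = 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]

feasible = np.array([0.5, 0.5])    # on the constraint boundary, viol = 0
infeasible = np.array([0.0, 0.0])  # violates g, so the penalty dominates
```

Because infeasible points receive a fitness far worse than any feasible one, the covariance operators are effectively steered back toward the feasible region without any change to the unconstrained search machinery.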
The study acknowledges that adding three operators increases the number of hyper‑parameters (e.g., covariance scaling factors, adaptation rates). Currently these are set empirically; future work should focus on self‑adaptive parameter control or meta‑learning strategies. Moreover, the operator‑selection scheme is based on simple probability updates; more sophisticated reinforcement‑learning‑based schedulers could further improve performance, especially in dynamic or multi‑objective settings.
Potential extensions include (i) automatic parameter adaptation via fuzzy logic or Bayesian optimization, (ii) adaptation to multi‑objective problems using Pareto‑based selection, (iii) parallel GPU implementation to mitigate the computational overhead of covariance calculations, and (iv) hybridization with problem‑specific heuristics for large‑scale engineering design.
In summary, IECO‑MCO presents a well‑justified methodological advancement: by exploiting covariance information at both global and local levels and by dynamically adjusting operator usage, it overcomes the primary weaknesses of ECO. Extensive benchmark testing, rigorous statistical validation, and successful constrained‑optimization experiments collectively demonstrate that IECO‑MCO is a robust, efficient, and practically applicable meta‑heuristic for contemporary global optimization challenges.