Uncertainty And Evolutionary Optimization: A Novel Approach

Evolutionary algorithms (EAs) have been widely accepted as efficient solvers for complex real-world optimization problems, including engineering optimization. However, real-world problems often involve uncertain environments, including noisy and/or dynamic ones, which pose major challenges to EA-based optimization. The presence of noise interferes with the evaluation and selection processes of an EA and thus adversely affects its performance. Moreover, because noise complicates direct evaluation of the fitness function, the fitness may need to be estimated rather than evaluated exactly. Several existing approaches attempt to address this problem, such as introducing diversity (hypermutation, random immigrants, special operators) or incorporating memory of the past (diploidy, case-based memory), but they fail to adequately address it. In this paper we propose the Distributed Population Switching Evolutionary Algorithm (DPSEA), which optimizes functions with noisy fitness using a distributed population-switching architecture that simulates a distributed, self-adaptive memory of the solution space. Local regression is used within the pseudo-populations to estimate the fitness. Successful application to benchmark test problems confirms the proposed method's superior performance in terms of both robustness and accuracy.


💡 Research Summary

The paper addresses a fundamental challenge in applying evolutionary algorithms (EAs) to real‑world optimization problems: the presence of noise in the fitness evaluation. Noise distorts the selection pressure, slows convergence, and often forces practitioners to resort to costly repeated evaluations or ad‑hoc diversity mechanisms. Existing remedies—hyper‑mutation, random immigrants, special operators, diploidy, or case‑based memory—either fail to directly mitigate noisy fitness estimation or introduce substantial computational overhead and parameter sensitivity.

To overcome these limitations, the authors propose the Distributed Population Switching Evolutionary Algorithm (DPSEA). DPSEA introduces a two‑level architectural innovation. At the first level, the overall population is partitioned into several “pseudo‑populations” (sub‑groups) that evolve semi‑independently. This distributed arrangement acts as a self‑adaptive memory of different regions of the search space, preserving diversity and allowing simultaneous exploration of multiple basins of attraction. At the second level, after a predefined number of generations, the algorithm performs a “switching” operation: all individuals are merged, the global population is reshuffled, and new pseudo‑populations are re‑formed. This periodic recombination prevents the accumulation of erroneous fitness information caused by noise and re‑orients the search direction.

Within each pseudo‑population, DPSEA replaces direct noisy fitness evaluations with locally fitted regression models (e.g., low‑order polynomial regression or Gaussian process surrogates). The regression is trained on the most recent evaluated individuals, providing an unbiased estimate of the underlying true fitness while smoothing out stochastic fluctuations. Consequently, the algorithm reduces the number of expensive true evaluations, yet retains sufficient accuracy to guide selection and variation operators.
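The smoothing effect of such a surrogate can be illustrated with a minimal sketch: fitting a least-squares quadratic to noisy samples of a one-dimensional test function. The function, sample sizes, and helper names here are illustrative assumptions, not the paper's implementation, which operates on recent evaluations within each pseudo-population.

```python
import numpy as np

def fit_local_surrogate(points, noisy_fitness, degree=2):
    """Fit a least-squares polynomial surrogate to noisy evaluations.

    This sketch handles a 1-D decision variable for clarity; the paper's
    local-regression idea generalizes to higher dimensions.
    """
    coeffs = np.polyfit(points, noisy_fitness, deg=degree)
    return np.poly1d(coeffs)

# Noisy observations of f(x) = x^2 with additive Gaussian noise (sd = 0.5)
rng = np.random.default_rng(0)
xs = np.linspace(-2.0, 2.0, 41)
ys = xs**2 + rng.normal(0.0, 0.5, size=xs.shape)

surrogate = fit_local_surrogate(xs, ys)
# The fitted model smooths the stochastic fluctuations: its estimate at
# the true optimum x = 0 stays close to the true value 0, unlike any
# single noisy observation.
estimate_at_optimum = float(surrogate(0.0))
```

Using the surrogate for selection means individuals are ranked by the smoothed estimates rather than by single noisy samples, which is what reduces the need for repeated true evaluations.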

The algorithmic flow can be summarized as follows: (1) initialize a random population and split it into K pseudo‑populations; (2) within each pseudo‑population apply selection, crossover, and mutation; (3) fit a local regression model on the recent evaluated points and use the model to estimate fitness for selection; (4) after T generations merge all sub‑populations, reshuffle the individuals, and repartition into new pseudo‑populations; (5) repeat steps 2‑4 until a termination criterion is met (maximum generations or target fitness). The parameters K (number of pseudo‑populations) and T (switching interval) are problem‑dependent but can be tuned empirically.
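The five steps above can be sketched as a simplified skeleton. Everything concrete below is an assumption for illustration: the noisy sphere objective, truncation selection, the mutation scale, and the parameter values for K and T; the surrogate-fitting step (3) is elided in favor of raw noisy evaluations to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_sphere(x, sigma=0.5):
    # True fitness is ||x||^2; only a noisy version is observable
    return float(np.sum(x**2)) + rng.normal(0.0, sigma)

def dpsea_sketch(dim=5, pop_size=40, K=4, T=10, generations=100,
                 mut_scale=0.3):
    """Illustrative skeleton of the DPSEA flow:
    partition -> evolve pseudo-populations -> merge/reshuffle -> repartition.
    """
    # Step 1: random initialization
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    for gen in range(generations):
        # Step 4: every T generations, merge and reshuffle the population
        if gen % T == 0:
            rng.shuffle(pop)
        # Repartition into K pseudo-populations
        subpops = np.array_split(pop, K)
        new_subpops = []
        for sub in subpops:
            # Step 2: evaluate (noisily) and apply selection + variation
            fit = np.array([noisy_sphere(ind) for ind in sub])
            parents = sub[np.argsort(fit)[: max(2, len(sub) // 2)]]
            children = parents[rng.integers(0, len(parents), size=len(sub))]
            children = children + rng.normal(0.0, mut_scale, children.shape)
            new_subpops.append(children)
        pop = np.vstack(new_subpops)
    # Step 5: pick the final answer by averaging a few re-evaluations
    scores = [np.mean([noisy_sphere(ind) for _ in range(5)]) for ind in pop]
    return pop[int(np.argmin(scores))]

best = dpsea_sketch()
```

Even with this crude selection scheme, the periodic merge-and-reshuffle keeps sub-groups from locking onto fitness estimates corrupted by noise, which is the behavior the paper's switching operation is designed to produce.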

Experimental validation employed classic continuous benchmark functions (Sphere, Rosenbrock, Rastrigin) corrupted with additive Gaussian noise of varying standard deviations (0.1, 0.5, 1.0). DPSEA was compared against three representative noisy‑EA strategies: a hyper‑mutation EA, a random‑immigrant EA, and a case‑based memory EA. Performance metrics included mean best‑found fitness, standard deviation across 30 independent runs, and convergence speed measured by fitness improvement per generation.
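The benchmark setup described above, classic test functions corrupted with additive Gaussian noise, can be sketched as follows. The function definitions are the standard textbook forms; the wrapper and constants are illustrative assumptions rather than the paper's exact code.

```python
import numpy as np

def sphere(x):
    return np.sum(x**2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def make_noisy(f, sigma, rng):
    # Additive Gaussian noise, matching the paper's corruption model
    return lambda x: f(x) + rng.normal(0.0, sigma)

rng = np.random.default_rng(1)
for sigma in (0.1, 0.5, 1.0):
    noisy = make_noisy(sphere, sigma, rng)
    # At the optimum the observed fitness fluctuates around the true value 0,
    # so a single evaluation can be arbitrarily misleading at high sigma
    samples = [noisy(np.zeros(10)) for _ in range(1000)]
```

Averaging many samples recovers the true value, which is exactly the expensive alternative (repeated evaluations) that surrogate-based estimation aims to avoid.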

Results demonstrated that DPSEA consistently outperformed the baselines. In low‑noise settings (σ=0.1) DPSEA achieved a 5‑10 % improvement in mean best fitness; in high‑noise scenarios (σ=1.0) the advantage grew to 20‑30 %. Moreover, DPSEA exhibited markedly lower variance across runs, indicating robust behavior despite stochastic perturbations. The convergence curves revealed a two‑phase pattern: rapid progress during the early generations within each pseudo‑population, followed by a refinement phase after each switching event, where the merged population exploited the newly gathered information to fine‑tune solutions.

The authors identify several strengths of DPSEA: (i) distributed pseudo‑populations preserve exploration diversity without excessive random immigrants; (ii) periodic switching mitigates error propagation caused by noisy evaluations; (iii) local regression provides a cheap yet accurate surrogate for fitness, reducing the number of true evaluations. Potential drawbacks include the need to select appropriate values for K and T, which may affect performance on different problem classes, and the additional computational cost of fitting regression models—though this cost is modest compared with repeated noisy evaluations.

In conclusion, DPSEA offers a principled and effective framework for noisy optimization, combining distributed memory, adaptive population restructuring, and surrogate‑based fitness estimation. The authors suggest future work on extending the method to dynamic environments where the objective function changes over time, incorporating multi‑objective extensions, and developing automated mechanisms for tuning the algorithm’s hyper‑parameters. Such advancements would further enhance the practicality of DPSEA for complex engineering, financial, and scientific optimization tasks under uncertainty.