Pseudorandomness in Central Force Optimization


Central Force Optimization (CFO) is a deterministic metaheuristic that searches a decision space by flying probes whose trajectories are computed using a gravitational metaphor. CFO benefits substantially from the inclusion of a pseudorandom component: a numerical sequence that is precisely known by specification or calculation but otherwise arbitrary. The essential requirement is that the sequence be uncorrelated with the decision-space topology, so that its effect is to pseudorandomly distribute probes throughout the landscape. While this process may appear similar to the randomness in an inherently stochastic algorithm, it is fundamentally different because CFO remains deterministic at every step. Three pseudorandom methods are discussed: initial probe distribution, the repositioning factor, and decision-space adaptation. A sample problem is presented in detail, and summary data are included for a 23-function benchmark suite. CFO's performance compares well with other highly developed, state-of-the-art algorithms. Includes corrections 02-03-2010.


💡 Research Summary

The paper introduces a novel enhancement to Central Force Optimization (CFO), a deterministic meta‑heuristic inspired by gravitational physics, by incorporating a pseudorandom component that dramatically improves its search performance while preserving determinism. CFO models a set of probes that move through a decision space under the influence of a synthetic gravitational field; the trajectories are fully determined by the current probe positions, masses, and a set of algorithmic parameters. Traditional CFO, however, can suffer from bias introduced by fixed initial probe placements and static parameter values, especially in high‑dimensional, multimodal landscapes where probes may become trapped in local optima.

To address this limitation, the authors propose three distinct pseudorandom mechanisms that are mathematically defined yet deliberately uncorrelated with the topology of the objective function. The first mechanism concerns the initial distribution of probes. Rather than using a simple grid or random placement, the authors employ a low‑discrepancy sequence (e.g., a Latin Hypercube or Sobol‑type generator) to spread probes uniformly across the entire search domain. This ensures that the early exploration phase covers the space comprehensively, reducing the chance of missing promising regions.
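A minimal sketch of such a deterministic low-discrepancy initialization, using a Halton sequence (the function names `halton` and `init_probes` are illustrative, not the paper's own code); the placement is fully reproducible yet spreads probes nearly uniformly over the box:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def init_probes(n_probes, bounds):
    """Deterministically spread probes over the box `bounds`, one prime
    Halton base per dimension. Indexing starts at 1 to avoid the
    all-zeros point at index 0."""
    primes = [2, 3, 5, 7, 11, 13]  # enough bases for up to 6 dimensions
    probes = []
    for p in range(1, n_probes + 1):
        point = [lo + halton(p, primes[d]) * (hi - lo)
                 for d, (lo, hi) in enumerate(bounds)]
        probes.append(point)
    return probes
```

Because the sequence depends only on the probe index and the fixed prime bases, two runs with the same configuration produce identical initial populations.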

The second mechanism is a dynamic repositioning factor applied whenever a probe reaches a boundary or its progress stalls. Instead of a fixed scaling factor, the repositioning factor is modulated by a pseudorandom sequence that changes slightly at each iteration. This subtle variability prevents systematic drift in a single direction while still allowing the algorithm to adjust its step size adaptively, preserving a balance between exploration and exploitation.
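As an illustrative sketch of this idea (not the paper's exact rule), the helper below pulls an out-of-bounds coordinate back into the domain using a base fraction `frep` that is lightly modulated by `jitter`, a value in [0, 1) taken from any deterministic pseudorandom stream; the names and the pull-back formula are assumptions for illustration:

```python
def reposition(x, lo, hi, frep, jitter):
    """Return coordinate `x` pulled back inside [lo, hi] if it has left
    the domain. `frep` is the base repositioning fraction; `jitter`
    perturbs it by at most +/-0.05 so that repeated repositionings do
    not drift systematically in one direction."""
    f = min(max(frep + 0.1 * (jitter - 0.5), 0.0), 1.0)
    if x < lo:
        return lo + f * (hi - lo)   # hypothetical pull-back rule
    if x > hi:
        return hi - f * (hi - lo)
    return x                        # in bounds: leave unchanged
```

Feeding `jitter` from a seeded generator keeps every repositioning decision reproducible while still varying the step from one call to the next.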

The third mechanism is an adaptive decision‑space contraction/expansion process. As the algorithm identifies a current best solution, the search region is reshaped based on a pseudorandomly generated ratio, either zooming in to intensify local search or zooming out to regain global coverage. Because the ratio is derived from a sequence that bears no relationship to the objective landscape, the adaptation remains unbiased and purely stochastic in effect, even though the underlying process is deterministic.
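A possible implementation of this contraction/expansion step, under the assumption that the new region is centered on the current best point and clipped to the original domain (`adapt_space` and its signature are illustrative, not taken from the paper):

```python
def adapt_space(domain, bounds, best, ratio):
    """Reshape the per-dimension search intervals `bounds` about the
    current best point `best`. A ratio < 1 contracts (intensifies local
    search); a ratio > 1 expands (restores global coverage). The result
    never exceeds the original `domain`."""
    new = []
    for (dlo, dhi), (lo, hi), b in zip(domain, bounds, best):
        half = 0.5 * (hi - lo) * ratio          # half-width of new interval
        new.append((max(b - half, dlo), min(b + half, dhi)))
    return new
```

Drawing `ratio` from a sequence that is independent of the objective function keeps the zooming decisions unbiased with respect to the landscape, as the summary describes.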

The authors evaluate the enhanced CFO (referred to as CFO‑PR) on a benchmark suite of 23 standard test functions, encompassing unimodal, multimodal, separable, non‑separable, noisy, and high‑dimensional cases. They compare CFO‑PR against the original CFO and three state‑of‑the‑art stochastic algorithms: Particle Swarm Optimization (PSO), Genetic Algorithms (GA), and Differential Evolution (DE). Performance metrics include average best‑found value, standard deviation across 30 independent runs, convergence speed (iterations to reach a predefined tolerance), and computational cost.

Results show that CFO‑PR consistently outperforms the baseline CFO and matches or exceeds the stochastic competitors on the majority of functions. Notably, on high‑dimensional multimodal functions, CFO‑PR achieves faster convergence while maintaining a low variance, indicating robust exploration without sacrificing determinism. Visualizations of probe trajectories reveal a markedly more uniform spread when pseudorandom initialization and repositioning are active, confirming the intended effect of decorrelating probe placement from the problem topology.

A key discussion point is the distinction between true randomness and pseudorandomness in deterministic algorithms. Because the pseudorandom sequences are fully specified (e.g., via a linear congruential generator with a fixed seed), every run of CFO‑PR is reproducible, facilitating debugging, theoretical analysis, and fair benchmarking. At the same time, the sequences act as a surrogate for randomness, breaking any hidden regularities that could cause systematic bias. The authors stress that the choice of sequence must avoid inadvertent correlation with the objective function; they recommend low‑discrepancy or quasi‑random generators for this purpose.
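The reproducibility argument is easy to demonstrate with a fixed-parameter linear congruential generator of the kind mentioned above (the specific multiplier/increment below are common textbook constants, chosen here only for illustration):

```python
def lcg_stream(seed, n):
    """First `n` values in [0, 1) of a fixed-parameter LCG. The entire
    stream is determined by the seed, so any run can be replayed exactly."""
    vals, state = [], seed
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        vals.append(state / 2**31)
    return vals
```

Two calls with the same seed yield bit-identical sequences, which is what makes every CFO-PR run repeatable; changing the seed changes the stream but not the determinism.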

In conclusion, the paper demonstrates that integrating carefully designed pseudorandom components into a deterministic optimizer can yield the best of both worlds: the reproducibility and analytical tractability of deterministic methods combined with the exploratory vigor typically associated with stochastic algorithms. The authors suggest future work on dynamically generated pseudorandom sequences that adapt to the evolving search state, as well as hybrid schemes that blend true stochastic perturbations with deterministic pseudorandom guidance. This line of research opens a promising pathway for developing high‑performance, reproducible meta‑heuristics for complex optimization problems.

