Variable Search Stepsize for Randomized Local Search in Multi-Objective Combinatorial Optimization

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Over the past two decades, research in evolutionary multi-objective optimization has predominantly focused on continuous domains, with comparatively limited attention given to multi-objective combinatorial optimization problems (MOCOPs). Combinatorial problems differ significantly from continuous ones in problem structure and landscape. Recent studies have shown that on MOCOPs, multi-objective evolutionary algorithms (MOEAs) can even be outperformed by simple randomized local search. Starting from a randomly sampled solution in the search space, randomized local search iteratively draws a random solution from an archive and performs local variation within its neighbourhood. However, in most existing methods the local variation relies on a fixed neighbourhood, which limits exploration and makes the search prone to getting trapped in local optima. In this paper, we present a simple yet effective local search method, called variable-stepsize randomized local search (VS-RLS), which adjusts the stepsize during the search. VS-RLS transitions gradually from a broad, exploratory search in the early phases to a more focused, fine-grained search as the search progresses. We demonstrate the effectiveness and generalizability of VS-RLS through extensive evaluations against local search methods and MOEAs on diverse MOCOPs.


💡 Research Summary

The paper addresses a notable gap in multi‑objective combinatorial optimization (MOCOP): while evolutionary multi‑objective algorithms (MOEAs) dominate the literature, they often underperform on discrete problems due to limited local search capability. Recent work has shown that simple randomized local search (RLS) can surpass MOEAs on several MOCOPs, yet existing RLS variants rely on a fixed neighbourhood (e.g., 1‑bit flips), which hampers exploration and leads to premature stagnation at Pareto local optima (PLOs).

To overcome this limitation, the authors propose Variable‑Stepsize Randomized Local Search (VS‑RLS). The algorithm starts with the smallest possible stepsize (r = 1) and progressively expands the neighbourhood size within a single iteration until a non‑dominated offspring is found or a predefined maximum stepsize (R_max) is reached. The process is as follows:

  1. Initialise an archive with a randomly sampled solution.
  2. At each iteration, randomly select a parent from the archive.
  3. Generate an offspring by sampling uniformly from the neighbourhood N_r(parent) defined by the current stepsize r.
  4. If the offspring is non‑dominated with respect to the archive, insert it and terminate the iteration.
  5. Otherwise, increase r (r ← r + 1) and repeat step 3.
  6. When r exceeds R_max, abandon the current parent and select a new one.
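The loop above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: it assumes a binary encoding (stepsize r = number of bits flipped), a fixed R_max, and the classic LOTZ toy function in place of a real MOCOP; all names are illustrative.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def lotz(x):
    """Leading Ones, Trailing Zeros: a standard bi-objective toy function."""
    lo = next((i for i, bit in enumerate(x) if bit == 0), len(x))
    tz = next((i for i, bit in enumerate(reversed(x)) if bit == 1), len(x))
    return (lo, tz)

def vs_rls(f, dim, budget, r_max):
    """Variable-stepsize RLS sketch: grow r from 1 until a
    non-dominated offspring is found or r exceeds r_max."""
    x = [random.randint(0, 1) for _ in range(dim)]
    archive = [(x, f(x))]
    evals = 1
    while evals < budget:
        parent, _ = random.choice(archive)          # step 2: random parent
        r = 1                                       # start from smallest stepsize
        while r <= r_max and evals < budget:
            child = parent[:]                       # step 3: sample from N_r(parent)
            for i in random.sample(range(dim), r):  # flip exactly r distinct bits
                child[i] ^= 1
            fc = f(child)
            evals += 1
            if not any(dominates(fa, fc) or fa == fc for _, fa in archive):
                # step 4: non-dominated -> prune dominated entries, insert, stop
                archive = [(a, fa) for a, fa in archive if not dominates(fc, fa)]
                archive.append((child, fc))
                break
            r += 1                                  # step 5: enlarge neighbourhood
        # step 6: r exceeded r_max -> outer loop picks a new parent
    return archive
```

By construction the archive always contains only mutually non-dominated solutions, which is what steps 4 and 6 guarantee in the paper's description.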

R_max is not static; it is set large (often the problem dimension D) at the early stage to encourage global exploration, and gradually reduced as the archive fills, thereby shifting the algorithm’s focus toward exploitation. This “exploration‑to‑exploitation” schedule is the core novelty of VS‑RLS.
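One simple way to realise this schedule is a linear decay from the problem dimension down to 1. The paper's exact formula is not reproduced in this summary, so the function below is an illustrative assumption, not the authors' schedule.

```python
def r_max_schedule(evals_used, budget, dim):
    """Illustrative schedule: shrink the maximum stepsize linearly
    from the problem dimension D down to 1 as the budget is consumed."""
    frac = evals_used / budget          # fraction of budget spent, in [0, 1]
    return max(1, round(dim * (1.0 - frac)))
```

Early in the run (frac near 0) this returns roughly D, permitting large exploratory jumps; late in the run it returns 1, reducing VS-RLS to a fixed-step fine-grained search.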

The authors illustrate the mechanism with a bi‑objective toy graph where a fixed‑step RLS becomes trapped at a local optimum, whereas VS‑RLS expands its stepsize, samples from a broader region, and escapes.

Experimental evaluation covers four benchmark MOCOP families: multi‑objective knapsack, traveling salesman (TSP), quadratic assignment (QAP), and NK‑landscape. For each problem, 30 independent runs were performed under a common evaluation budget (≈10⁴·D function evaluations). Performance was measured using Hypervolume (HV) and Inverted Generational Distance (IGD). Baselines included the classic SEMO (fixed‑step RLS) and three widely used MOEAs: NSGA‑II, MOEA/D, and SMS‑EMOA.
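For a bi-objective maximization problem, the Hypervolume indicator used above has a simple closed form: it is the area dominated by the front and bounded by a reference point. The sweep below is a standard 2-D computation, included only to make the metric concrete; the paper's reference points and HV tooling are not specified here.

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D maximization front w.r.t. reference point `ref`.
    Dominated and duplicate points are skipped by the sweep."""
    # sort by first objective descending; sweep adds one rectangle per
    # point whose second objective improves on everything seen so far
    pts = sorted(set(points), key=lambda p: (-p[0], -p[1]))
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > prev_f2:
            hv += (f1 - ref[0]) * (f2 - prev_f2)
            prev_f2 = f2
    return hv
```

With reference point (0, 0), the front {(3,1), (2,2), (1,3)} covers an area of 6, which the sweep recovers as three stacked rectangles.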

Results show that VS‑RLS consistently outperforms SEMO, achieving HV improvements of roughly 10–25% across all problem sets and delivering lower IGD values, indicating a closer approximation to the true Pareto front. Compared with MOEAs, VS‑RLS is competitive, especially on smaller‑scale instances where MOEAs often struggle with the discrete landscape. Importantly, the additional computational overhead remains modest because the number of sampled neighbours per iteration grows only linearly with the current stepsize, and the overall evaluation budget is fixed.

The paper also discusses the algorithm’s generality. In binary‑encoded problems, the stepsize corresponds to the number of bits flipped; in permutation‑based problems, it maps to the number of 2‑opt or swap operations. A simple adaptive schedule (e.g., halve R_max after 50 % of the budget) works well without problem‑specific tuning, highlighting the method’s practicality.
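The two encoding-specific interpretations of the stepsize can be written as small sampling operators. These are generic textbook moves (r-bit flip and r random swaps), shown here to make the mapping concrete; the paper may use other permutation moves such as 2-opt.

```python
import random

def neighbour_binary(x, r):
    """Binary encoding: a stepsize-r neighbour flips exactly r distinct bits,
    i.e. it lies at Hamming distance r from x."""
    y = list(x)
    for i in random.sample(range(len(y)), r):
        y[i] ^= 1
    return y

def neighbour_permutation(p, r):
    """Permutation encoding: a stepsize-r neighbour applies r random swap
    moves (2-opt moves would work analogously for TSP-like problems)."""
    q = list(p)
    for _ in range(r):
        i, j = random.sample(range(len(q)), 2)
        q[i], q[j] = q[j], q[i]
    return q
```

Both operators keep the solution feasible by construction: the binary variant changes exactly r positions, and the permutation variant only reorders existing elements.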

Finally, the authors suggest several avenues for future work: (i) theoretical runtime analysis of VS‑RLS on artificial MOCOPs, (ii) more sophisticated dynamic stepsize control (e.g., reinforcement‑learning‑based adaptation), and (iii) hybridisation with MOEAs, where VS‑RLS could serve as an embedded local improvement operator within an evolutionary loop. The provided open‑source implementation (GitHub) facilitates reproducibility and further exploration.

In summary, VS‑RLS introduces a lightweight yet powerful modification to randomized local search—dynamic stepsize expansion—that markedly improves both convergence speed and solution diversity on a broad spectrum of multi‑objective combinatorial problems, positioning it as a strong alternative or complement to existing MOEA frameworks.

