First-improvement vs. Best-improvement Local Optima Networks of NK Landscapes

This paper extends a recently proposed model of combinatorial landscapes, Local Optima Networks (LONs), to incorporate a first-improvement (greedy-ascent) hill-climbing algorithm, instead of a best-improvement (steepest-ascent) one, for defining and extracting the basins of attraction of the landscape optima. A statistical analysis comparing best- and first-improvement network models for a set of NK landscapes is presented and discussed. Our results suggest structural differences between the two models with respect to both network connectivity and the nature of the basins of attraction. The impact of these differences on the behavior of search heuristics based on first- and best-improvement local search is thoroughly discussed.


💡 Research Summary

The paper extends the Local Optima Network (LON) model, a graph‑based representation of combinatorial fitness landscapes, by incorporating a first‑improvement (greedy‑ascent) hill‑climbing algorithm instead of the traditionally used best‑improvement (steepest‑ascent) method. In the classic LON framework, each node corresponds to a local optimum, and edges encode the probability of transitioning from the basin of one optimum to that of another under a given local search rule. By redefining basins using first‑improvement, the authors create an alternative network that reflects a different exploration dynamic.

To evaluate the structural differences, the authors conduct an exhaustive empirical study on NK landscapes, a canonical tunable model in which N denotes the number of binary variables and K controls epistatic interaction and thus ruggedness. They generate 1,000 random instances for each combination of N = 14, 16, 18, 20 and K = 2, 4, 6, 8, 10, 12, covering a wide range of difficulty levels. For every instance they enumerate the full search space (2^N configurations), apply both best‑improvement and first‑improvement hill‑climbers to locate all local optima, and construct the corresponding LONs. Basins are defined as the sets of starting solutions that converge to a particular optimum under the respective algorithm; transition probabilities are estimated from the fraction of solutions in basin A that, after a single move and the subsequent climb, end up in basin B.
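The pipeline described above — a random NK instance, the two hill‑climbers, exhaustive basin extraction, and LON edge weights — can be sketched in a few dozen lines. The code below is a hedged illustration on a deliberately tiny instance (n = 8, k = 2); the function names and the edge‑weight estimate follow the description above, not the authors' actual code.

```python
import itertools
import random
from collections import Counter

def make_nk(n, k, seed=0):
    """Random NK landscape: each bit i interacts with k other randomly chosen bits."""
    rng = random.Random(seed)
    neighbors = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]
    def fitness(x):
        return sum(tables[i][(x[i],) + tuple(x[j] for j in neighbors[i])]
                   for i in range(n)) / n
    return fitness

def flip(x, i):
    y = list(x)
    y[i] ^= 1
    return tuple(y)

def best_improvement(x, fitness, n):
    """Steepest ascent: scan the whole neighborhood, take the best improving flip."""
    while True:
        best = max((flip(x, i) for i in range(n)), key=fitness)
        if fitness(best) <= fitness(x):
            return x
        x = best

def first_improvement(x, fitness, n, rng):
    """Greedy ascent: accept the first improving flip found in a random scan order."""
    while True:
        for i in rng.sample(range(n), n):
            y = flip(x, i)
            if fitness(y) > fitness(x):
                x = y
                break
        else:                     # full scan, no improving neighbor: local optimum
            return x

n, k = 8, 2                       # tiny instance so full enumeration is cheap
fitness = make_nk(n, k)

# Basin of attraction: map every configuration to the optimum it climbs to.
basins = {x: best_improvement(x, fitness, n)
          for x in itertools.product((0, 1), repeat=n)}
optima = set(basins.values())
basin_size = Counter(basins.values())

# LON edge weight p(A -> B): probability that one random bit-flip of a random
# solution in basin A, followed by the climb, ends at optimum B.
edges = Counter()
for x, a in basins.items():
    for i in range(n):
        b = basins[flip(x, i)]
        edges[(a, b)] += 1.0 / (n * basin_size[a])
```

By construction the outgoing edge weights of each optimum sum to 1, so the LON doubles as the transition matrix of a Markov chain over basins — the object whose spectrum is analyzed below.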

The analysis focuses on several network metrics: average degree, clustering coefficient, average shortest‑path length, basin‑size distribution, correlation between basin size and fitness, and spectral properties of the transition matrix. Key findings include:

  1. Higher connectivity for first‑improvement LONs – average degree is roughly 30 % larger, indicating more frequent basin crossings because the greedy rule stops at the first improving neighbor rather than searching for the best.
  2. Lower clustering – first‑improvement networks are less locally clustered, suggesting a flatter, more “random‑like” topology, whereas best‑improvement networks exhibit tighter clusters of basins.
  3. Shorter average path lengths – the greedy networks have smaller average shortest‑path distances, implying that a random walk can reach any optimum in fewer steps.
  4. Distinct basin‑size patterns – best‑improvement produces a few very large basins that dominate the landscape, while first‑improvement yields many smaller basins, leading to a heavy‑tailed distribution.
  5. Weaker size‑fitness correlation – both models show a positive correlation between basin size and optimum fitness, but the correlation coefficient is about 0.15 lower for first‑improvement, meaning large basins are less predictive of high fitness under the greedy rule.
  6. Spectral differences – the eigenvalue spectrum of the transition matrix for best‑improvement is dominated by a few large eigenvalues, indicating faster convergence of the associated Markov chain; first‑improvement spectra are more spread out, reflecting longer mixing times.
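Most of the metrics listed above can be read directly off the LON's weighted transition matrix. A minimal, self-contained sketch on an invented 4‑optimum matrix follows; the weights and the toy basin-size/fitness data are illustrative, not results from the paper.

```python
import numpy as np

# Row-stochastic transition matrix of a toy 4-optimum LON; W[a, b] is the
# probability that a perturbation + climb from basin a ends in basin b.
# (These weights are invented for demonstration.)
W = np.array([
    [0.50, 0.30, 0.20, 0.00],
    [0.25, 0.50, 0.15, 0.10],
    [0.10, 0.20, 0.60, 0.10],
    [0.00, 0.30, 0.30, 0.40],
])
assert np.allclose(W.sum(axis=1), 1.0)   # rows are outgoing distributions

# Average out-degree, self-loops excluded (the connectivity of finding 1).
avg_out_degree = float(((W > 0).sum(axis=1) - 1).mean())

# Basin-size vs. optimum-fitness correlation (finding 5) on invented data.
sizes = np.array([120, 60, 50, 26])
fitnesses = np.array([0.71, 0.66, 0.68, 0.62])
r = float(np.corrcoef(sizes, fitnesses)[0, 1])

# Spectral view (finding 6): a row-stochastic matrix has leading eigenvalue 1;
# the gap to the second-largest eigenvalue modulus controls how fast the
# associated Markov chain mixes.
moduli = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
spectral_gap = float(1.0 - moduli[1])
```

A larger spectral gap means faster mixing, which matches the summary's reading of the best‑improvement spectra; a flatter spectrum, as reported for first‑improvement, implies longer mixing times.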

These structural differences have direct algorithmic implications. Greedy‑ascent heuristics tend to explore the landscape more uniformly and can escape shallow traps quickly, but they may waste effort on many small basins and have a reduced ability to concentrate search on high‑quality regions. Steepest‑ascent heuristics, by contrast, quickly funnel the search into a few large, high‑fitness basins, at the risk of premature convergence. The authors suggest hybrid strategies—starting with first‑improvement to gain diversity and later switching to best‑improvement for exploitation—as a promising direction.
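The suggested hybrid can be sketched as a two-phase climber. OneMax stands in for a real NK fitness function, and the scheme below (`hybrid_climb`, `steps_first`) is an illustrative reading of the idea, not the authors' algorithm.

```python
import random

def flip(x, i):
    y = list(x)
    y[i] ^= 1
    return tuple(y)

def hybrid_climb(x, fitness, steps_first, rng):
    """Illustrative hybrid: a budget of first-improvement moves, then steepest ascent."""
    n = len(x)
    # Phase 1 (diversifying): accept the first improving flip found, at most
    # steps_first times; stop early if x is already a local optimum.
    for _ in range(steps_first):
        for i in rng.sample(range(n), n):
            y = flip(x, i)
            if fitness(y) > fitness(x):
                x = y
                break
        else:
            break
    # Phase 2 (exploiting): scan the full neighborhood, take the best flip.
    while True:
        best = max((flip(x, i) for i in range(n)), key=fitness)
        if fitness(best) <= fitness(x):
            return x
        x = best

onemax = sum                      # toy stand-in fitness: number of ones
rng = random.Random(0)
start = tuple(rng.randint(0, 1) for _ in range(12))
result = hybrid_climb(start, onemax, steps_first=3, rng=rng)
```

On a unimodal fitness like OneMax both phases reach the global optimum; on rugged NK landscapes the `steps_first` budget would trade the diversity of greedy ascent against the funneling behavior of steepest ascent.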

The paper concludes that LONs provide a powerful quantitative lens for comparing local search dynamics. By exposing how different move‑selection policies reshape basin topology and transition probabilities, LON analysis can guide the design of more effective metaheuristics for NK landscapes and, by extension, other combinatorial problems such as TSP or SAT. Future work is proposed on extending the framework to alternative neighborhood structures, dynamic basins, and adaptive algorithms that modify their improvement rule on‑the‑fly.