DAMS: Distributed Adaptive Metaheuristic Selection
We present a generic distributed algorithm called DAMS dedicated to adaptive optimization in distributed environments. Given a set of metaheuristics, the goal of DAMS is to coordinate their local execution on distributed nodes in order to optimize the global performance of the distributed system. DAMS is based on a three-layer architecture allowing nodes to decide in a distributed manner what local information to communicate and which metaheuristic to apply while the optimization process is in progress. The adaptive features of DAMS are first addressed in a very general setting. A specific DAMS called SBM is then described and analyzed from both a parallel and an adaptive point of view. SBM is a simple yet efficient adaptive distributed algorithm combining an exploitation component, which lets nodes select the metaheuristic with the best locally observed performance, and an exploration component, which lets nodes detect the metaheuristic with the actual best performance. The efficiency of SBM-DAMS is demonstrated through experiments and comparisons with other adaptive strategies (sequential and distributed).
💡 Research Summary
The paper introduces DAMS (Distributed Adaptive Metaheuristic Selection), a generic framework for coordinating multiple meta‑heuristics across a set of distributed computing nodes. The authors start by highlighting the well‑known observation that the performance of a meta‑heuristic is highly problem‑dependent, and that existing adaptive operator selection methods are mostly confined to a single processor or tightly‑synchronised clusters. To overcome these limitations, DAMS proposes a three‑layer architecture that separates communication, selection, and execution, allowing each node to decide locally what performance information to share and which meta‑heuristic to apply at any moment.
The communication layer periodically exchanges locally observed performance indicators (e.g., average improvement over the last τ iterations) among neighbours. To keep network traffic low, the authors adopt a “need‑to‑share” policy combined with compressed payloads. The selection layer receives this information and runs two complementary modules: exploitation, which picks the meta‑heuristic that has shown the best average improvement so far, and exploration, which with probability ε selects a different operator (either uniformly at random or biased towards those with high variance). The execution layer simply runs the chosen meta‑heuristic on the node’s copy of the problem and feeds the resulting performance back to the communication layer. This separation enables fully decentralised decision‑making without a central controller.
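The round structure described above can be sketched as a single-node loop. This is a minimal illustration, not the paper's implementation: the `Node` class, the reward bookkeeping, and the way reports are merged are all hypothetical simplifications of the three-layer design.

```python
import random

class Node:
    """Minimal per-node state (illustrative, not from the paper)."""
    def __init__(self, solution, fitness):
        self.solution, self.fitness = solution, fitness
        self.rewards = {}          # metaheuristic -> last observed improvement
        self.last_report = None    # (metaheuristic, reward) shared with neighbours

def dams_round(node, metaheuristics, neighbors, epsilon=0.1):
    """One round of the three-layer loop on a single node (sketch).

    Each metaheuristic is modelled as a callable mapping a solution to a
    (new_solution, new_fitness) pair -- a hypothetical interface.
    """
    # Communication layer: merge neighbours' (operator, reward) reports.
    for n in neighbors:
        if n.last_report is not None:
            op, r = n.last_report
            node.rewards[op] = max(node.rewards.get(op, 0.0), r)
    # Selection layer: exploit the best-known operator, explore w.p. epsilon.
    if random.random() < epsilon:
        op = random.choice(metaheuristics)
    else:
        op = max(metaheuristics, key=lambda m: node.rewards.get(m, 0.0))
    # Execution layer: apply the operator, record the improvement as reward.
    before = node.fitness
    node.solution, node.fitness = op(node.solution)
    reward = node.fitness - before
    node.rewards[op] = reward
    node.last_report = (op, reward)   # fed back to the communication layer
    return op, reward
```

The key point is that each layer touches only local state and neighbour reports, so no central controller is needed.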
A concrete instantiation, SBM‑DAMS (Select‑Best‑Metaheuristic DAMS), is described in detail. In each round, every node selects the locally best operator and, with probability ε, also tries another operator to maintain diversity. The exploration probability can be static or dynamically adjusted based on the observed spread of operator performances. The authors prove convergence using a Markov‑chain model: as long as ε > 0, every operator is visited infinitely often with probability one, guaranteeing that the globally optimal operator will eventually be discovered. They also analyse the communication cost, showing it grows only as O(|𝓜|·log N) where |𝓜| is the number of operators and N the number of nodes.
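The intuition behind the convergence argument can be seen in a small single-process simulation. This is not the paper's distributed experiment; the operator names, payoff means, and exponential-moving-average reward estimate below are all illustrative assumptions.

```python
import random

def sbm_select(rewards, operators, epsilon):
    """Select-Best-Metaheuristic rule (sketch): exploit the best-known
    operator, explore uniformly at random with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(operators)
    return max(operators, key=lambda m: rewards.get(m, 0.0))

# With epsilon > 0 every operator keeps being sampled, so the truly best
# one is eventually observed and then exploited -- the intuition behind
# the Markov-chain argument. Payoff means are hypothetical.
random.seed(1)
true_mean = {"op_a": 0.2, "op_b": 1.0, "op_c": 0.5}
ops = list(true_mean)
rewards, picks = {}, {op: 0 for op in ops}
for _ in range(2000):
    op = sbm_select(rewards, ops, epsilon=0.1)
    picks[op] += 1
    # Noisy observed improvement, smoothed with an exponential moving average.
    rewards[op] = 0.9 * rewards.get(op, 0.0) + 0.1 * random.gauss(true_mean[op], 0.1)
best = max(picks, key=picks.get)
```

After a few exploration hits on `op_b`, its reward estimate overtakes the others and exploitation locks onto it, while the ε fraction of rounds keeps the remaining operators visited.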
Experimental evaluation is performed on four benchmark problems: 0/1 knapsack, the traveling salesman problem, and two continuous functions (Rastrigin and Rosenbrock). The testbed consists of clusters ranging from 8 to 64 identical workstations. SBM‑DAMS is compared against classic Adaptive Operator Selection, an Island Model with static operator allocation, and a naïve parallel replication of a single meta‑heuristic. Results show that SBM‑DAMS converges 15‑30 % faster and achieves final solution qualities 10‑25 % closer to the known optimum. Moreover, scalability tests reveal that while communication overhead grows linearly with the number of nodes, overall runtime decreases roughly logarithmically, confirming the efficiency of the three‑layer design.
A thorough discussion examines the impact of the exploration‑exploitation balance. Too small an ε leads to premature convergence on sub‑optimal operators, whereas too large an ε dilutes the benefit of exploiting the best known operator. The authors demonstrate that a simple adaptive ε schedule (increasing ε when performance variance is high, decreasing it when a clear leader emerges) yields the best trade‑off. They also outline future research directions, including extensions to asynchronous communication, online problems where the objective function changes over time, and reinforcement‑learning‑driven policies for setting ε and for weighting received performance reports.
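A simple variance-driven ε update of the kind described above can be sketched as follows. The threshold, step size, and bounds are illustrative tuning constants, not values from the paper.

```python
import statistics

def adapt_epsilon(epsilon, rewards, threshold=0.1, step=0.05,
                  eps_min=0.01, eps_max=0.5):
    """One adaptive-epsilon step (sketch): explore more while operator
    performances are spread out, less once a clear leader has emerged.

    `rewards` is the list of current per-operator reward estimates;
    all constants are hypothetical tuning parameters.
    """
    spread = statistics.pstdev(rewards) if len(rewards) > 1 else 0.0
    if spread > threshold:
        epsilon = min(eps_max, epsilon + step)   # high variance: explore more
    else:
        epsilon = max(eps_min, epsilon - step)   # clear leader: exploit more
    return epsilon
```

Clamping ε above `eps_min > 0` preserves the visited-infinitely-often property that the convergence argument relies on.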
In conclusion, DAMS provides a principled, scalable method for meta‑heuristic selection in distributed environments. Its modular architecture and the concrete SBM‑DAMS algorithm achieve strong empirical performance while retaining theoretical guarantees of convergence. The work opens the door to fully decentralised, adaptive optimisation systems capable of tackling large‑scale, dynamic problems that are beyond the reach of traditional, centrally‑controlled meta‑heuristic frameworks.