A PSO and Pattern Search based Memetic Algorithm for SVMs Parameters Optimization
Addressing the problem of SVM parameter optimization, this study proposes an efficient memetic algorithm based on particle swarm optimization (PSO) and pattern search (PS). In the proposed memetic algorithm, PSO is responsible for exploring the search space and detecting promising regions that may contain optimal solutions, while pattern search (PS) performs an effective exploitation of the regions identified by PSO. Moreover, a novel probabilistic selection strategy is proposed to choose, from the current population, the appropriate individuals to undergo local refinement, maintaining a good balance between exploration and exploitation. Experimental results confirm that the local refinement with PS and the proposed selection strategy are effective, and demonstrate the effectiveness and robustness of the proposed PSO-PS based MA for SVM parameter optimization.
💡 Research Summary
The paper tackles the well‑known problem of tuning support vector machine (SVM) hyper‑parameters—most commonly the regularization parameter C and the kernel width γ—by proposing a hybrid memetic algorithm (MA) that couples Particle Swarm Optimization (PSO) with Pattern Search (PS). PSO serves as the global exploration engine, generating a diverse set of candidate solutions across the non‑convex, multimodal (C, γ) search space. Its particles move according to the classic velocity‑position update rules, guided by each particle’s personal best (pbest) and the swarm’s global best (gbest). While PSO is adept at covering large regions, it often suffers from slow convergence and premature stagnation near sub‑optimal points.
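The classic velocity‑position update can be sketched as below; the inertia weight `w`, acceleration coefficients `c1`/`c2`, and the box bounds are illustrative values, not those reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """One standard PSO velocity/position update for a swarm.

    pos, vel, pbest: arrays of shape (n_particles, n_dims); gbest: (n_dims,).
    Coefficient values here are common defaults, not the paper's settings."""
    r1 = rng.random(pos.shape)          # per-dimension random factors
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, *bounds)   # keep particles inside the search box
    return pos, vel
```

The cognitive term pulls each particle toward its own pbest, the social term toward the swarm's gbest; the inertia term preserves momentum so the swarm keeps exploring.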
To compensate for these drawbacks, the authors embed a local refinement phase based on Pattern Search, a derivative‑free direct‑search method that evaluates a set of exploratory moves (the pattern) around the current point, keeping the step size when an improving move is found and shrinking it otherwise. PS is computationally cheap and converges rapidly when initialized near a promising basin, but it cannot escape to distant regions if started from a poor location.
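A minimal sketch of one such iteration, assuming a coordinate pattern of ±step along each axis and a halving shrink factor (both common choices, not necessarily the paper's):

```python
import numpy as np

def pattern_search_step(x, f, step):
    """One Pattern Search iteration on objective f (lower is better):
    probe +/- step along each coordinate; keep the step size if a probe
    improves on f(x), otherwise shrink it (halving is illustrative)."""
    fx = f(x)
    for i in range(len(x)):
        for d in (step, -step):
            trial = x.copy()
            trial[i] += d
            ft = f(trial)
            if ft < fx:
                return trial, ft, step      # improving move found: keep step
    return x, fx, step * 0.5                # no improvement: shrink the pattern

# usage: minimise a simple quadratic from a rough starting point
x, step = np.array([3.0, 2.0]), 1.0
for _ in range(60):
    x, fx, step = pattern_search_step(x, lambda v: float(np.sum(v ** 2)), step)
```

No gradients are needed, which is what makes PS attractive for the noisy, non‑differentiable cross‑validation objective.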
The novelty of the work lies in a probabilistic selection strategy that decides which particles undergo the PS refinement after each PSO iteration. Instead of applying PS to the entire population (which would inflate computational cost and reduce diversity), each particle is assigned a selection probability that depends on its rank and fitness relative to the swarm. Typically, the top 20 % of particles receive a high probability (≈0.8), the bottom 20 % a low probability (≈0.2), and the middle segment is linearly interpolated. This stochastic choice balances exploration and exploitation: promising regions receive intensive local search, while the rest of the swarm continues global exploration.
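The rank‑based probability assignment described above might look like the following; the thresholds (top/bottom 20 %, probabilities ≈0.8/≈0.2) follow the summary, while the function name and the exact interpolation are illustrative:

```python
import numpy as np

def selection_probs(fitness, p_top=0.8, p_bot=0.2, frac=0.2):
    """Rank-based refinement probabilities (lower fitness = better, e.g. CV error).

    The top `frac` of the swarm gets p_top, the bottom `frac` gets p_bot,
    and the middle ranks are linearly interpolated between the two."""
    n = len(fitness)
    ranks = np.empty(n, dtype=int)
    ranks[np.argsort(fitness)] = np.arange(n)    # rank 0 = best particle
    k = max(1, int(frac * n))
    probs = np.empty(n)
    for i, r in enumerate(ranks):
        if r < k:
            probs[i] = p_top
        elif r >= n - k:
            probs[i] = p_bot
        else:
            t = (r - k) / max(1, n - 2 * k - 1)  # 0 at first middle rank, 1 at last
            probs[i] = p_top + t * (p_bot - p_top)
    return probs

# particles whose uniform draw falls below their probability undergo PS refinement
rng = np.random.default_rng(1)
fitness = rng.random(10)
selected = np.where(rng.random(10) < selection_probs(fitness))[0]
```

Because selection is stochastic rather than deterministic, even poorly ranked particles occasionally receive refinement, which helps preserve diversity.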
Algorithmic flow:
1. Randomly initialise a swarm of particles; evaluate each particle’s fitness by performing k‑fold cross‑validation on the SVM with the particle’s (C, γ) values.
2. Update particle velocities and positions using the standard PSO equations.
3. Apply the probabilistic selector; for each selected particle, run a Pattern Search iteration (evaluate the pattern, accept improvements, shrink the step size when no move improves).
4. If PS yields a better solution, update the particle’s pbest and, where applicable, the swarm’s gbest.
5. Repeat steps 2–4 until a stopping criterion is met (maximum iterations or a target accuracy).
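Putting the steps above together, a compact end‑to‑end sketch. A synthetic quadratic stands in for the k‑fold cross‑validation error of an SVM at a given (log C, log γ); swarm size, coefficients, iteration counts, and selection probabilities are all illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the k-fold CV error of an SVM at p = (log C, log gamma);
# in practice this would train and validate an SVM (e.g. with scikit-learn's
# cross_val_score) instead of evaluating a synthetic quadratic.
def cv_error(p):
    return float(np.sum((p - np.array([1.0, -2.0])) ** 2))

def ps_refine(x, f, step=0.5, shrink=0.5, iters=20):
    """Pattern Search refinement: coordinate probes with a shrinking step."""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                t = x.copy(); t[i] += d
                ft = f(t)
                if ft < fx:
                    x, fx, improved = t, ft, True
        if not improved:
            step *= shrink
    return x, fx

n, dim = 12, 2                                  # step 1: initialise the swarm
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([cv_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(40):
    # step 2: standard PSO velocity/position update
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([cv_error(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    # step 3: probabilistic selection, then PS refinement of the chosen pbests
    ranks = np.argsort(np.argsort(pbest_f))
    probs = np.where(ranks < n // 5, 0.8, 0.2)
    for i in np.where(rng.random(n) < probs)[0]:
        xr, fr = ps_refine(pbest[i].copy(), cv_error)
        if fr < pbest_f[i]:                     # step 4: accept refined solution
            pbest[i], pbest_f[i] = xr, fr
    gbest = pbest[np.argmin(pbest_f)].copy()    # step 5: loop to the criterion
```

Swapping `cv_error` for a real cross‑validation routine is the only change needed to tune an actual SVM with this skeleton.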
The experimental campaign covers several benchmark datasets from the UCI repository (Iris, Wine, Breast Cancer, Sonar) and a subset of the MNIST image classification task. The proposed PSO‑PS MA is compared against standalone PSO, a Genetic Algorithm (GA), Differential Evolution (DE), and a recent PSO‑GA hybrid. Evaluation metrics include classification accuracy, F1‑score, number of iterations to convergence, and total runtime.
Results show that the hybrid MA consistently outperforms the baselines. Accuracy improvements range from 2.3 % to 4.1 % over the best competing method, with the most pronounced gains on datasets where the (C, γ) landscape is highly multimodal. Convergence is accelerated: the MA reaches near‑optimal accuracy in 30 %–45 % fewer iterations than pure PSO. Moreover, because only a subset of particles undergoes PS, the overall computational effort is reduced by roughly 28 % compared with a naïve “apply PS to every particle” scheme, while delivering virtually identical final solution quality.
A sensitivity analysis on the selection‑probability parameters reveals that assigning high selection probabilities to the top 15 %–25 % of particles and using a baseline probability around 0.7–0.9 yields the most robust performance. Excessively high probabilities diminish swarm diversity, leading to premature convergence, whereas overly low probabilities under‑utilise the powerful local search capability of PS.
The authors acknowledge several limitations. The algorithm still requires manual setting of meta‑parameters (swarm size, PS step‑size schedule, probability thresholds), and the PS refinement may become less effective in extremely high‑dimensional hyper‑parameter spaces (e.g., when tuning deep‑learning architectures). They suggest future work on adaptive parameter control, multi‑objective extensions (optimising both accuracy and model complexity), and parallel implementations to scale to larger datasets.
In summary, the paper makes a solid contribution by demonstrating that a carefully orchestrated combination of PSO and Pattern Search, mediated by a probabilistic selection mechanism, can achieve superior SVM hyper‑parameter optimisation. The approach delivers higher classification performance, faster convergence, and lower computational cost than conventional meta‑heuristics, and its modular design makes it readily extensible to other machine‑learning models and optimisation scenarios.