ParamILS: An Automatic Algorithm Configuration Framework
The identification of performance-optimizing parameter settings is an important part of the development and application of algorithms. We describe an automatic framework for this algorithm configuration problem. More formally, we provide methods for optimizing a target algorithm’s performance on a given class of problem instances by varying a set of ordinal and/or categorical parameters. We review a family of local-search-based algorithm configuration procedures and present novel techniques for accelerating them by adaptively limiting the time spent for evaluating individual configurations. We describe the results of a comprehensive experimental evaluation of our methods, based on the configuration of prominent complete and incomplete algorithms for SAT. We also present what is, to our knowledge, the first published work on automatically configuring the CPLEX mixed integer programming solver. All the algorithms we considered had default parameter settings that were manually identified with considerable effort. Nevertheless, using our automated algorithm configuration procedures, we achieved substantial and consistent performance improvements.
💡 Research Summary
The paper tackles the long‑standing challenge of algorithm parameter tuning by introducing ParamILS, a fully automated framework that searches for high‑performing configurations of a target algorithm on a given set of problem instances. The authors formalize the configuration problem as the optimization of a black‑box algorithm’s performance with respect to a discrete space of ordinal and categorical parameters. Their solution builds on Iterated Local Search (ILS), a meta‑heuristic that repeatedly performs local moves, accepts improvements, and periodically perturbs the current solution to escape local optima.
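The ILS backbone described above can be sketched in a few lines. This is a generic illustration, not the paper's exact procedure: the function names, the first-improvement descent, and the better-or-equal acceptance criterion are all illustrative choices.

```python
import random

def iterated_local_search(initial, neighbors, cost, perturb, iterations=100):
    """Generic ILS skeleton: local search + perturbation + acceptance.

    neighbors(c) yields single-parameter modifications of configuration c;
    cost(c) estimates c's performance (lower is better);
    perturb(c) applies a few random changes to escape local optima.
    All names here are illustrative, not ParamILS's API.
    """
    def local_search(c):
        # First-improvement descent over the one-exchange neighbourhood.
        improved = True
        while improved:
            improved = False
            for n in neighbors(c):
                if cost(n) < cost(c):
                    c, improved = n, True
                    break
        return c

    incumbent = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(incumbent))
        if cost(candidate) <= cost(incumbent):  # accept improvements and ties
            incumbent = candidate
    return incumbent
```

On a toy one-dimensional problem (integer configurations, convex cost), this loop reliably recovers the optimum; in ParamILS the configuration is a vector of parameter values and `cost` is estimated from runs of the target algorithm.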
Two major technical contributions distinguish ParamILS from a naive ILS‑based configurator. First, the authors propose adaptive capping, a dynamic runtime‑budget mechanism that stops evaluating a candidate configuration as soon as it can be proven worse than the best configuration found so far: by comparing partial sums of runtimes over the training instances against the incumbent's total runtime on those same instances, the method avoids wasting time on clearly inferior settings without changing the outcome of any comparison. Second, they introduce FocusedILS, a variant that adaptively varies the number of target‑algorithm runs used to evaluate each configuration. Whereas BasicILS compares configurations on a fixed number of runs, FocusedILS starts with few runs per candidate, performs additional runs on configurations that look promising, and only prefers one configuration over another once it dominates it on the available evidence. This concentrates evaluation effort on strong candidates and yields faster convergence, especially when individual target‑algorithm runs are expensive.
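The capping idea can be sketched as follows. This is a deliberately simplified version (the paper distinguishes trajectory-preserving and more aggressive variants, which add further bookkeeping); `run_target` and its signature are illustrative assumptions.

```python
def evaluate_with_capping(run_target, config, instances, incumbent_total):
    """Evaluate `config` on `instances`, stopping early once its cumulative
    runtime provably exceeds the incumbent's total on the same instances.

    run_target(config, instance, cutoff) runs the target algorithm with a
    per-run cutoff and returns the time consumed (== cutoff on a timeout).
    Simplified sketch of adaptive capping, not the paper's exact procedure.
    """
    total = 0.0
    for inst in instances:
        remaining = incumbent_total - total
        if remaining <= 0:
            return float('inf')  # provably no better than the incumbent: stop
        # Cap this run at the remaining budget; a longer run could not
        # change the comparison's outcome anyway.
        total += run_target(config, inst, cutoff=remaining)
    return total
```

A configuration whose runs blow past the incumbent's total budget is rejected after consuming only that budget, rather than the full per-run cutoff on every instance.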
The experimental evaluation is extensive. The authors configure prominent SAT solvers—SAPS, an incomplete local‑search solver, and SPEAR, a complete tree‑search solver—as well as the IBM ILOG CPLEX mixed‑integer programming (MIP) solver. The configuration spaces range from a handful of (discretized) continuous parameters for SAPS to dozens of parameters for SPEAR and CPLEX, whose default values were originally set by domain experts after considerable manual effort. For each algorithm, a benchmark suite of SAT or MIP instances is split into training and validation sets; ParamILS runs on the training set while the validation set is used to guard against over‑fitting.
Results show dramatic performance gains. On the SAT benchmarks, the automatically found configurations substantially outperform the carefully chosen defaults: SPEAR in particular is sped up by orders of magnitude on some industrial verification instance sets, and the tuned solvers solve a larger fraction of hard instances within the imposed time limit. SAPS likewise benefits consistently across benchmark distributions. For CPLEX, the tuned configurations achieve substantial speedups on a range of MIP benchmark sets, and in several cases the tuned solver succeeds within the time cap where the default configuration fails. Importantly, the tuned settings retain their advantage on unseen validation instances, indicating that ParamILS avoids over‑specialization.
The paper also discusses methodological considerations. Adaptive capping substantially reduces the total time spent on configuration without degrading the quality of the configurations found, making the approach practical for expensive target algorithms. FocusedILS is especially beneficial for large configuration spaces, since allocating evaluation runs unevenly across candidates concentrates effort on the most promising ones. The authors acknowledge limitations: ParamILS handles only discrete (ordinal or categorical) parameters, so continuous parameters must be discretized before configuration. Moreover, the framework optimizes a single scalar objective (typically runtime); multi‑objective extensions to balance memory consumption, solution quality, or robustness are left for future work.
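The standard workaround for the discrete-only restriction is to map each continuous parameter onto a grid of candidate values before configuration, so ParamILS can treat it as ordinal. A minimal sketch (the grid choice below is illustrative; in practice the grid is often chosen to bracket the default value, sometimes on a log scale):

```python
def discretize(lo, hi, k):
    """Map a continuous parameter range [lo, hi] onto k evenly spaced
    candidate values, turning a continuous parameter into an ordinal one."""
    step = (hi - lo) / (k - 1)
    return [lo + i * step for i in range(k)]
```

The cost of discretization is that the true optimum may fall between grid points; the finer the grid, the larger the resulting configuration space.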
In conclusion, ParamILS demonstrates that systematic, automated configuration can outperform painstaking manual tuning across diverse algorithm families. By coupling a robust local‑search backbone with adaptive evaluation budgeting, the framework delivers both efficiency and effectiveness. The adaptive capping idea is generic: it has since been adopted by model‑based configurators such as SMAC and fits naturally into Bayesian‑optimization‑style approaches, shaping subsequent work on fully automated algorithm design pipelines.