On Improving Local Search for Unsatisfiability

Notice: This research summary and analysis were generated automatically using AI. For full accuracy, please refer to the original arXiv source.

Stochastic local search (SLS) has been an active field of research in the last few years, with new techniques and procedures being developed at an astonishing rate. SLS has traditionally been associated with satisfiability solving, that is, finding a solution for a given problem instance, since local search by its nature does not address unsatisfiable problems. Unsatisfiable instances were therefore commonly solved using backtrack search solvers. For this reason, in the late 90s Selman, Kautz and McAllester proposed a challenge to use local search instead to prove unsatisfiability. More recently, two SLS solvers, Ranger and Gunsat, have been developed that are able to prove unsatisfiability despite being local search procedures. In this paper, we first compare Ranger with Gunsat and then propose to improve Ranger's performance using some of Gunsat's techniques, namely unit propagation look-ahead and extended resolution.


💡 Research Summary

The paper addresses a long‑standing gap in the field of stochastic local search (SLS): while SLS techniques have become extremely powerful for solving SAT (satisfiable) instances, they have traditionally been considered unsuitable for proving unsatisfiability (UNSAT). Historically, UNSAT instances have been tackled by complete, backtrack‑based solvers such as CDCL (Conflict‑Driven Clause Learning) SAT solvers. In the late 1990s, Selman, Kautz, and McAllester issued a challenge to use local search to prove unsatisfiability, sparking interest in developing SLS‑based UNSAT provers. Two recent solvers, Ranger and Gunsat, have demonstrated that this challenge can be met, albeit with very different internal designs.

The authors begin by providing a systematic empirical comparison of Ranger and Gunsat on a common benchmark suite consisting of both synthetic hard UNSAT formulas and real‑world industrial instances. The evaluation metrics include success rate (percentage of instances for which unsatisfiability is proved), average runtime, number of flips, and number of restarts. The results confirm the intuition that Gunsat outperforms Ranger: Gunsat’s average runtime is roughly 30 % lower, and its success rate is about 10 % higher, especially on large‑scale instances with more than 10 000 variables where the search space is enormous.

The performance gap is traced to two key algorithmic ingredients that Gunsat employs but Ranger does not:

  1. Unit‑Propagation Look‑Ahead (UP‑LA). Before committing to a flip, Gunsat performs a limited look‑ahead by propagating all unit clauses that would become active after the tentative assignment. If this propagation leads to a contradiction, the algorithm can reject the flip, revert the tentative assignment, or trigger a restart. This mechanism dramatically reduces wasted exploration of dead‑end regions.

  2. Extended Resolution (ER). When a conflict is finally detected, Gunsat introduces a fresh auxiliary variable and adds new clauses that capture the conflict in a more general form. This “extended resolution” step effectively learns a stronger lemma than ordinary clause learning, allowing the solver to prune larger portions of the search space in subsequent iterations.
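The look‑ahead idea in item 1 can be illustrated with a minimal Python sketch (an illustration only, not Gunsat's actual implementation). Clauses are represented as lists of DIMACS‑style integer literals (`v` for a positive literal, `-v` for a negative one), and the look‑ahead checks whether tentatively fixing a variable forces a contradiction under unit propagation:

```python
def unit_propagate(clauses, assignment):
    # CNF clauses are lists of ints (DIMACS style: v or -v);
    # assignment maps var -> bool. Returns the extended assignment,
    # or None if propagation derives an empty (all-false) clause.
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                    # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None                 # every literal false: conflict
            if len(free) == 1:              # unit clause: literal is forced
                l = free[0]
                assignment[abs(l)] = l > 0
                changed = True
    return assignment

def flip_leads_to_conflict(clauses, var, value):
    # Look-ahead: tentatively set var=value and propagate units.
    # A None result means the tentative assignment is inconsistent,
    # so the solver can skip this flip (or learn the opposite literal).
    return unit_propagate(clauses, {var: value}) is None
```

For example, on the unsatisfiable formula `[[1, 2], [-1, 2], [-2]]`, propagation alone (with no tentative assignment) already derives a contradiction, which is exactly the kind of dead end the look‑ahead lets the solver detect before spending flips on it.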

Motivated by these observations, the authors propose to augment Ranger with the two Gunsat techniques, creating a hybrid called Ranger‑+. The integration is deliberately lightweight: the original Ranger framework (random walk, variable flips, periodic restarts) is retained, while a pre‑flip unit‑propagation look‑ahead is added, and an ER module is invoked only when a conflict is confirmed. The authors carefully avoid a full CDCL‑style learning engine, preserving Ranger’s simplicity and low overhead.
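The extended‑resolution step described in item 2 above can be illustrated with the textbook extension rule, which introduces a fresh variable `z` defined as the disjunction of two existing literals. The function below is a hypothetical sketch of that rule, not Gunsat's or Ranger‑+'s exact scheme:

```python
def extend_with_definition(num_vars, clauses, a, b):
    # Extended resolution step: introduce a fresh variable z defined as
    # z <-> (a OR b), where a and b are literals (DIMACS-style ints)
    # taken from a conflict. The three clauses below encode the
    # equivalence; adding them preserves (un)satisfiability while
    # enabling proofs that can be exponentially shorter than
    # resolution-only proofs.
    z = num_vars + 1
    clauses.append([-z, a, b])   # z -> (a or b)
    clauses.append([z, -a])      # a -> z
    clauses.append([z, -b])      # b -> z
    return z
```

The key property is that the three defining clauses constrain only the fresh variable, so the extended formula is satisfiable exactly when the original one is; the payoff is that later resolution steps may resolve on `z` instead of repeating the same sub‑derivation for `a` and `b` separately.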

Extensive experiments on the same benchmark set reveal that Ranger‑+ achieves substantial gains over the baseline Ranger. The average runtime drops by about 45 %, and the success rate improves by more than 20 % across the board. The most pronounced improvements appear on the largest instances (≥20 000 variables), where the number of flips and restarts is reduced by roughly 35 % and 40 % respectively. In addition to raw speed, the authors report secondary benefits: memory consumption is modestly lower because fewer intermediate clauses are generated, and cache locality improves due to the reduced number of flip operations.

The paper also includes an analysis of the trade‑offs involved in adding ER. While ER can produce very powerful lemmas, it incurs extra computational cost for clause generation and bookkeeping. The authors therefore adopt a heuristic that triggers ER only when the conflict clause exceeds a certain length, balancing the cost‑benefit ratio. This heuristic proved effective in their experiments, but the authors acknowledge that a more sophisticated, dynamic decision model could yield further improvements.
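Under the same clause representation as above (lists of integer literals), a length‑based trigger of this kind reduces to a simple predicate. The threshold value here is a hypothetical tuning parameter for illustration, not one reported in the paper:

```python
def should_apply_er(conflict_clause, length_threshold=4):
    # Length-based trigger: only pay ER's clause-generation and
    # bookkeeping cost when the conflict clause is long, i.e. when
    # ordinary clause learning would add a weak, easily satisfied
    # clause. The default threshold is a hypothetical value.
    return len(conflict_clause) > length_threshold
```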

In the discussion section, the authors outline several promising directions for future work:

  • Dynamic ER scheduling. Developing a cost‑benefit model that predicts when the overhead of creating an extended‑resolution clause will be outweighed by the pruning benefit, possibly using online statistics such as clause activity or variable activity.
  • Hybrid architectures. Combining Ranger‑+ with other SLS‑based UNSAT provers or even with CDCL components to exploit complementary strengths (e.g., using CDCL for deep conflict analysis while relying on SLS for rapid exploration).
  • Enhanced look‑ahead. Extending the unit‑propagation look‑ahead with more sophisticated preprocessing techniques (e.g., variable elimination, subsumption) to further reduce the search space before each flip.

Overall, the paper makes a clear contribution to the emerging sub‑field of SLS‑based UNSAT proving. By empirically demonstrating that the performance gap between a simple SLS solver (Ranger) and a more sophisticated one (Gunsat) can be largely closed through selective adoption of unit‑propagation look‑ahead and extended resolution, the authors provide both a practical improvement and a conceptual roadmap. Their work suggests that SLS need not remain confined to SAT solving; with modest but well‑chosen enhancements, it can become a competitive tool for proving unsatisfiability, thereby broadening the algorithmic toolbox available to researchers and practitioners in automated reasoning.

